Project Name: AI-Driven Decision Support for Supply Chain Optimisation¶

Author: Mostafa Moazzen
Date: August 27, 2024
Contact: m.moazzen-2022@hull.ac.uk


Project Description¶

This notebook provides code implementations of various deep machine learning models designed to optimize supply chain processes.

Guide¶

Please use table of content in left sidebar to navigate through code.


libraries¶

In [1]:
# import libraries
# libraries were added in the related section
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.express as px
import warnings
import plotly.graph_objs as go
import plotly.offline as py
import os
import datetime
In [2]:
pd.set_option('display.max_columns', None)
warnings.filterwarnings("ignore")
In [3]:
# reading dataset
df = pd.read_csv("SCMS_Delivery_History_Dataset.csv")
In [4]:
df['Item Description'].nunique()
Out[4]:
184
In [5]:
df['Product Group'].value_counts()
Out[5]:
Product Group
ARV     8550
HRDT    1728
ANTM      22
ACT       16
MRDT       8
Name: count, dtype: int64
In [6]:
df
Out[6]:
ID Project Code PQ # PO / SO # ASN/DN # Country Managed By Fulfill Via Vendor INCO Term Shipment Mode PQ First Sent to Client Date PO Sent to Vendor Date Scheduled Delivery Date Delivered to Client Date Delivery Recorded Date Product Group Sub Classification Vendor Item Description Molecule/Test Type Brand Dosage Dosage Form Unit of Measure (Per Pack) Line Item Quantity Line Item Value Pack Price Unit Price Manufacturing Site First Line Designation Weight (Kilograms) Freight Cost (USD) Line Item Insurance (USD)
0 1 100-CI-T01 Pre-PQ Process SCMS-4 ASN-8 Côte d'Ivoire PMO - US Direct Drop EXW Air Pre-PQ Process Date Not Captured 2-Jun-06 2-Jun-06 2-Jun-06 HRDT HIV test RANBAXY Fine Chemicals LTD. HIV, Reveal G3 Rapid HIV-1 Antibody Test, 30 T... HIV, Reveal G3 Rapid HIV-1 Antibody Test Reveal NaN Test kit 30 19 551.00 29.00 0.97 Ranbaxy Fine Chemicals LTD Yes 13 780.34 NaN
1 3 108-VN-T01 Pre-PQ Process SCMS-13 ASN-85 Vietnam PMO - US Direct Drop EXW Air Pre-PQ Process Date Not Captured 14-Nov-06 14-Nov-06 14-Nov-06 ARV Pediatric Aurobindo Pharma Limited Nevirapine 10mg/ml, oral suspension, Bottle, 2... Nevirapine Generic 10mg/ml Oral suspension 240 1000 6200.00 6.20 0.03 Aurobindo Unit III, India Yes 358 4521.5 NaN
2 4 100-CI-T01 Pre-PQ Process SCMS-20 ASN-14 Côte d'Ivoire PMO - US Direct Drop FCA Air Pre-PQ Process Date Not Captured 27-Aug-06 27-Aug-06 27-Aug-06 HRDT HIV test Abbott GmbH & Co. KG HIV 1/2, Determine Complete HIV Kit, 100 Tests HIV 1/2, Determine Complete HIV Kit Determine NaN Test kit 100 500 40000.00 80.00 0.80 ABBVIE GmbH & Co.KG Wiesbaden Yes 171 1653.78 NaN
3 15 108-VN-T01 Pre-PQ Process SCMS-78 ASN-50 Vietnam PMO - US Direct Drop EXW Air Pre-PQ Process Date Not Captured 1-Sep-06 1-Sep-06 1-Sep-06 ARV Adult SUN PHARMACEUTICAL INDUSTRIES LTD (RANBAXY LAB... Lamivudine 150mg, tablets, 60 Tabs Lamivudine Generic 150mg Tablet 60 31920 127360.80 3.99 0.07 Ranbaxy, Paonta Shahib, India Yes 1855 16007.06 NaN
4 16 108-VN-T01 Pre-PQ Process SCMS-81 ASN-55 Vietnam PMO - US Direct Drop EXW Air Pre-PQ Process Date Not Captured 11-Aug-06 11-Aug-06 11-Aug-06 ARV Adult Aurobindo Pharma Limited Stavudine 30mg, capsules, 60 Caps Stavudine Generic 30mg Capsule 60 38000 121600.00 3.20 0.05 Aurobindo Unit III, India Yes 7590 45450.08 NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
10319 86818 103-ZW-T30 FPQ-15197 SO-50020 DN-4307 Zimbabwe PMO - US From RDC N/A - From RDC Truck 10/16/14 N/A - From RDC 31-Jul-15 15-Jul-15 20-Jul-15 ARV Pediatric SCMS from RDC Lamivudine/Nevirapine/Zidovudine 30/50/60mg, d... Lamivudine/Nevirapine/Zidovudine Generic 30/50/60mg Chewable/dispersible tablet - FDC 60 166571 599655.60 3.60 0.06 Mylan, H-12 & H-13, India No See DN-4307 (ID#:83920) See DN-4307 (ID#:83920) 705.79
10320 86819 104-CI-T30 FPQ-15259 SO-50102 DN-4313 Côte d'Ivoire PMO - US From RDC N/A - From RDC Truck 10/24/14 N/A - From RDC 31-Jul-15 6-Aug-15 7-Aug-15 ARV Adult SCMS from RDC Lamivudine/Zidovudine 150/300mg, tablets, 60 Tabs Lamivudine/Zidovudine Generic 150/300mg Tablet - FDC 60 21072 137389.44 6.52 0.11 Hetero Unit III Hyderabad IN No See DN-4313 (ID#:83921) See DN-4313 (ID#:83921) 161.71
10321 86821 110-ZM-T30 FPQ-14784 SO-49600 DN-4316 Zambia PMO - US From RDC N/A - From RDC Truck 8/12/14 N/A - From RDC 31-Aug-15 25-Aug-15 3-Sep-15 ARV Adult SCMS from RDC Efavirenz/Lamivudine/Tenofovir Disoproxil Fuma... Efavirenz/Lamivudine/Tenofovir Disoproxil Fuma... Generic 600/300/300mg Tablet - FDC 30 514526 5140114.74 9.99 0.33 Cipla Ltd A-42 MIDC Mahar. IN No Weight Captured Separately Freight Included in Commodity Cost 5284.04
10322 86822 200-ZW-T30 FPQ-16523 SO-51680 DN-4334 Zimbabwe PMO - US From RDC N/A - From RDC Truck 7/1/15 N/A - From RDC 9-Sep-15 4-Aug-15 11-Aug-15 ARV Adult SCMS from RDC Lamivudine/Zidovudine 150/300mg, tablets, 60 Tabs Lamivudine/Zidovudine Generic 150/300mg Tablet - FDC 60 17465 113871.80 6.52 0.11 Mylan (formerly Matrix) Nashik Yes 1392 Freight Included in Commodity Cost 134.03
10323 86823 103-ZW-T30 FPQ-15197 SO-50022 DN-4336 Zimbabwe PMO - US From RDC N/A - From RDC Truck 10/16/14 N/A - From RDC 31-Aug-15 4-Aug-15 11-Aug-15 ARV Pediatric SCMS from RDC Lamivudine/Zidovudine 30/60mg, dispersible tab... Lamivudine/Zidovudine Generic 30/60mg Chewable/dispersible tablet - FDC 60 36639 72911.61 1.99 0.03 Cipla, Goa, India No Weight Captured Separately Freight Included in Commodity Cost 85.82

10324 rows × 33 columns

Data Cleaning¶

In [7]:
# Displaying basic information to understand data types and missing values
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10324 entries, 0 to 10323
Data columns (total 33 columns):
 #   Column                        Non-Null Count  Dtype  
---  ------                        --------------  -----  
 0   ID                            10324 non-null  int64  
 1   Project Code                  10324 non-null  object 
 2   PQ #                          10324 non-null  object 
 3   PO / SO #                     10324 non-null  object 
 4   ASN/DN #                      10324 non-null  object 
 5   Country                       10324 non-null  object 
 6   Managed By                    10324 non-null  object 
 7   Fulfill Via                   10324 non-null  object 
 8   Vendor INCO Term              10324 non-null  object 
 9   Shipment Mode                 9964 non-null   object 
 10  PQ First Sent to Client Date  10324 non-null  object 
 11  PO Sent to Vendor Date        10324 non-null  object 
 12  Scheduled Delivery Date       10324 non-null  object 
 13  Delivered to Client Date      10324 non-null  object 
 14  Delivery Recorded Date        10324 non-null  object 
 15  Product Group                 10324 non-null  object 
 16  Sub Classification            10324 non-null  object 
 17  Vendor                        10324 non-null  object 
 18  Item Description              10324 non-null  object 
 19  Molecule/Test Type            10324 non-null  object 
 20  Brand                         10324 non-null  object 
 21  Dosage                        8588 non-null   object 
 22  Dosage Form                   10324 non-null  object 
 23  Unit of Measure (Per Pack)    10324 non-null  int64  
 24  Line Item Quantity            10324 non-null  int64  
 25  Line Item Value               10324 non-null  float64
 26  Pack Price                    10324 non-null  float64
 27  Unit Price                    10324 non-null  float64
 28  Manufacturing Site            10324 non-null  object 
 29  First Line Designation        10324 non-null  object 
 30  Weight (Kilograms)            10324 non-null  object 
 31  Freight Cost (USD)            10324 non-null  object 
 32  Line Item Insurance (USD)     10037 non-null  float64
dtypes: float64(4), int64(3), object(26)
memory usage: 2.6+ MB

Standardizing columns names¶

In [8]:
# Standardizing country names
df['Country'] = df['Country'].str.strip().str.title()
In [9]:
# Renaming columns for better readability
df.rename(columns={
    'PQ #': 'PQ_Number',
    'PO / SO #': 'PO_SO_Number',
    'ASN/DN #': 'ASN_DN_Number',
    'Unit of Measure (Per Pack)': 'Unit_of_Measure_Per_Pack',
    'Line Item Quantity': 'Line_Item_Quantity',
    'Line Item Value': 'Line_Item_Value',
    'Pack Price': 'Pack_Price',
    'Unit Price': 'Unit_Price',
    'Weight (Kilograms)': 'Weight_Kilograms',
    'Freight Cost (USD)': 'Freight_Cost_USD',
    'Line Item Insurance (USD)': 'Line_Item_Insurance_USD'
}, inplace=True)
In [10]:
# checking for null values
df.isnull().sum()
Out[10]:
ID                                 0
Project Code                       0
PQ_Number                          0
PO_SO_Number                       0
ASN_DN_Number                      0
Country                            0
Managed By                         0
Fulfill Via                        0
Vendor INCO Term                   0
Shipment Mode                    360
PQ First Sent to Client Date       0
PO Sent to Vendor Date             0
Scheduled Delivery Date            0
Delivered to Client Date           0
Delivery Recorded Date             0
Product Group                      0
Sub Classification                 0
Vendor                             0
Item Description                   0
Molecule/Test Type                 0
Brand                              0
Dosage                          1736
Dosage Form                        0
Unit_of_Measure_Per_Pack           0
Line_Item_Quantity                 0
Line_Item_Value                    0
Pack_Price                         0
Unit_Price                         0
Manufacturing Site                 0
First Line Designation             0
Weight_Kilograms                   0
Freight_Cost_USD                   0
Line_Item_Insurance_USD          287
dtype: int64
In [ ]:
 

Impute NAN with KNN for Freight cost and Weight¶

In [11]:
from sklearn.impute import KNNImputer
In [12]:
# Selecting relevant features for KNN imputation
data_for_imputation = df[['Freight_Cost_USD', 'Weight_Kilograms', 'Unit_of_Measure_Per_Pack', 'Line_Item_Quantity', 'Line_Item_Value',
                          'Pack_Price','Unit_Price', 'Line_Item_Insurance_USD']]

# Convert columns to appropriate data types
data_for_imputation = data_for_imputation.apply(pd.to_numeric, errors='coerce')
In [288]:
# heatmap for null values
sns.heatmap(data_for_imputation.isnull(), cmap ="Blues")
Out[288]:
<Axes: >
No description has been provided for this image
In [13]:
# Initialize the KNN imputer
imputer = KNNImputer(n_neighbors=5)

# Perform the imputation
data_imputed = imputer.fit_transform(data_for_imputation)

# Convert the imputed data back to a DataFrame
data_imputed_df = pd.DataFrame(data_imputed, columns=data_for_imputation.columns)

# Insert the imputed column back into the original dataframe
df['Freight_Cost_USD'] = data_imputed_df['Freight_Cost_USD']
df['Weight_Kilograms'] = data_imputed_df['Weight_Kilograms']
#df['Line_Item_Insurance_USD'] = data_imputed_df['Line_Item_Insurance_USD']
In [14]:
# checking for null values
data_imputed_df.isnull().sum()
Out[14]:
Freight_Cost_USD            0
Weight_Kilograms            0
Unit_of_Measure_Per_Pack    0
Line_Item_Quantity          0
Line_Item_Value             0
Pack_Price                  0
Unit_Price                  0
Line_Item_Insurance_USD     0
dtype: int64
In [15]:
df['Freight_Cost_USD'].value_counts()
Out[15]:
Freight_Cost_USD
9736.100     36
6147.180     27
2600.798     17
7445.800     16
13398.060    16
             ..
5613.630      1
13179.800     1
710.000       1
4354.800      1
16770.866     1
Name: count, Length: 8698, dtype: int64
In [16]:
# checking for null values
df.isnull().sum()
Out[16]:
ID                                 0
Project Code                       0
PQ_Number                          0
PO_SO_Number                       0
ASN_DN_Number                      0
Country                            0
Managed By                         0
Fulfill Via                        0
Vendor INCO Term                   0
Shipment Mode                    360
PQ First Sent to Client Date       0
PO Sent to Vendor Date             0
Scheduled Delivery Date            0
Delivered to Client Date           0
Delivery Recorded Date             0
Product Group                      0
Sub Classification                 0
Vendor                             0
Item Description                   0
Molecule/Test Type                 0
Brand                              0
Dosage                          1736
Dosage Form                        0
Unit_of_Measure_Per_Pack           0
Line_Item_Quantity                 0
Line_Item_Value                    0
Pack_Price                         0
Unit_Price                         0
Manufacturing Site                 0
First Line Designation             0
Weight_Kilograms                   0
Freight_Cost_USD                   0
Line_Item_Insurance_USD          287
dtype: int64

Convert date into datetime format and fill Nan with mean value¶

In [17]:
from datetime import timedelta
In [18]:
#Convering dates into datetime format. For 'PQ First Sent to Client Date' ,'PO Sent to Vendor Date ' . Coerce the errors as some of the dates are not defined.
dt = ["PQ First Sent to Client Date" ,'PO Sent to Vendor Date','Scheduled Delivery Date','Delivered to Client Date', 'Delivery Recorded Date']
for col in dt:
    df[col] = pd.to_datetime(df[col], errors = 'coerce')
In [19]:
# checking for null values
df.isnull().sum()
Out[19]:
ID                                 0
Project Code                       0
PQ_Number                          0
PO_SO_Number                       0
ASN_DN_Number                      0
Country                            0
Managed By                         0
Fulfill Via                        0
Vendor INCO Term                   0
Shipment Mode                    360
PQ First Sent to Client Date    2681
PO Sent to Vendor Date          5732
Scheduled Delivery Date            0
Delivered to Client Date           0
Delivery Recorded Date             0
Product Group                      0
Sub Classification                 0
Vendor                             0
Item Description                   0
Molecule/Test Type                 0
Brand                              0
Dosage                          1736
Dosage Form                        0
Unit_of_Measure_Per_Pack           0
Line_Item_Quantity                 0
Line_Item_Value                    0
Pack_Price                         0
Unit_Price                         0
Manufacturing Site                 0
First Line Designation             0
Weight_Kilograms                   0
Freight_Cost_USD                   0
Line_Item_Insurance_USD          287
dtype: int64
In [20]:
# Fill PO & PQ dates
#Calculate Average days between Price Quote-->Purchase Order--> Scheduled Delivery
pq_del_days = round((df['Scheduled Delivery Date'] - df['PQ First Sent to Client Date']).dt.days.mean(),0)
pq_po_days = round((df['PO Sent to Vendor Date'] - df['PQ First Sent to Client Date']).dt.days.mean(),0)
po_del_days = round((df['Scheduled Delivery Date'] - df['PO Sent to Vendor Date']).dt.days.mean(),0)
In [21]:
print (pq_del_days)
print (pq_po_days)
print (po_del_days)
172.0
54.0
106.0
In [22]:
# Assigning estimated dates of Price Quotation and Purchase Order
df['PQ First Sent to Client Date'] = df['PQ First Sent to Client Date'].fillna(df['Scheduled Delivery Date'] - timedelta(days=pq_del_days))
df['PO Sent to Vendor Date'] = df['PO Sent to Vendor Date'].fillna(df['Scheduled Delivery Date'] - timedelta(days=po_del_days))
In [23]:
# checking for null values
df.isnull().sum()
Out[23]:
ID                                 0
Project Code                       0
PQ_Number                          0
PO_SO_Number                       0
ASN_DN_Number                      0
Country                            0
Managed By                         0
Fulfill Via                        0
Vendor INCO Term                   0
Shipment Mode                    360
PQ First Sent to Client Date       0
PO Sent to Vendor Date             0
Scheduled Delivery Date            0
Delivered to Client Date           0
Delivery Recorded Date             0
Product Group                      0
Sub Classification                 0
Vendor                             0
Item Description                   0
Molecule/Test Type                 0
Brand                              0
Dosage                          1736
Dosage Form                        0
Unit_of_Measure_Per_Pack           0
Line_Item_Quantity                 0
Line_Item_Value                    0
Pack_Price                         0
Unit_Price                         0
Manufacturing Site                 0
First Line Designation             0
Weight_Kilograms                   0
Freight_Cost_USD                   0
Line_Item_Insurance_USD          287
dtype: int64

other columns Nan values¶

In [24]:
# Fill Insurance with percentage value of line item value
perc = df['Line_Item_Insurance_USD'].sum() / df['Line_Item_Value'][df['Line_Item_Insurance_USD'] >= 0].sum()
df['Line_Item_Insurance_USD'] = df['Line_Item_Insurance_USD'].fillna(round(df['Line_Item_Value']*perc, 2))
In [25]:
# Replace NAN with mode in Dosage column
df['Dosage'] = df['Dosage'].fillna(df['Dosage'].mode()[0])
In [26]:
# Replace NAN with mode in Shipment Mode column
df['Shipment Mode'] = df['Shipment Mode'].fillna(df['Shipment Mode'].mode()[0])
In [27]:
'''# Drop rows with no shipment mode
missing_shipment = df[df['Shipment Mode'].isna()].index
df = df.drop(missing_shipment, axis=0).reset_index(drop= True)'''
Out[27]:
"# Drop rows with no shipment mode\nmissing_shipment = df[df['Shipment Mode'].isna()].index\ndf = df.drop(missing_shipment, axis=0).reset_index(drop= True)"
In [28]:
# Removing duplicates
df.drop_duplicates(inplace=True)
In [29]:
# checking for null values
df.isnull().sum()
Out[29]:
ID                              0
Project Code                    0
PQ_Number                       0
PO_SO_Number                    0
ASN_DN_Number                   0
Country                         0
Managed By                      0
Fulfill Via                     0
Vendor INCO Term                0
Shipment Mode                   0
PQ First Sent to Client Date    0
PO Sent to Vendor Date          0
Scheduled Delivery Date         0
Delivered to Client Date        0
Delivery Recorded Date          0
Product Group                   0
Sub Classification              0
Vendor                          0
Item Description                0
Molecule/Test Type              0
Brand                           0
Dosage                          0
Dosage Form                     0
Unit_of_Measure_Per_Pack        0
Line_Item_Quantity              0
Line_Item_Value                 0
Pack_Price                      0
Unit_Price                      0
Manufacturing Site              0
First Line Designation          0
Weight_Kilograms                0
Freight_Cost_USD                0
Line_Item_Insurance_USD         0
dtype: int64
In [30]:
df.dropna(inplace=True)

Data Augmentation¶

In [31]:
# Aggregate data by month, summing only numeric columns
df['Month'] = df['Delivered to Client Date'].dt.to_period('M')

Bottleneck Identification¶

In [32]:
# Calculate delivery delay in days
df['Delivery Delay (Days)'] = (df['Delivery Recorded Date'] - df['Scheduled Delivery Date']).dt.days
# Calculate the delivery duration as days
df['delivery_duration'] = (df['Delivered to Client Date'] - df['PO Sent to Vendor Date']).dt.days
In [33]:
# Group by shipment mode and date to analyze bottlenecks by mode
shipment_mode_analysis = df.groupby([df['PO Sent to Vendor Date'].dt.date, 'Shipment Mode']).agg({
    'Delivery Delay (Days)': ['mean', 'sum', 'count'],  # Avg, Total, and number of shipments with delays
    'Weight_Kilograms': 'sum',
    'Freight_Cost_USD': 'sum'
}).reset_index()
In [34]:
shipment_mode_analysis
Out[34]:
PO Sent to Vendor Date Shipment Mode Delivery Delay (Days) Weight_Kilograms Freight_Cost_USD
mean sum count sum sum
0 2006-01-16 Air 0.000000 0 1 11.0 515.320
1 2006-01-24 Air 0.000000 0 1 15.0 633.730
2 2006-02-16 Air 0.000000 0 1 13.0 780.340
3 2006-03-30 Air 0.000000 0 1 15.0 1039.500
4 2006-04-06 Air 0.000000 0 1 568.6 2962.710
... ... ... ... ... ... ... ...
2273 2015-08-04 Air 7.666667 23 3 3446.8 20913.576
2274 2015-08-11 Truck 0.000000 0 1 183.0 2600.798
2275 2015-08-24 Truck 0.000000 0 16 15097.0 24200.524
2276 2015-09-16 Air -120.000000 -120 1 620.8 4154.730
2277 2015-09-16 Truck -142.000000 -142 1 90.0 6160.068

2278 rows × 7 columns

In [35]:
# Flatten the column names
shipment_mode_analysis.columns = ['Date', 'Shipment Mode', 'Avg_Delay_Days', 'Total_Delay_Days', 'Shipment_Count', 'Total_Weight', 'Total_Freight_Cost']

# Plot using Plotly for interactive filtering
fig = px.line(
    shipment_mode_analysis, 
    x='Date', 
    y='Avg_Delay_Days', 
    color='Shipment Mode', 
    title='Average Delivery Delay (Days) by Shipment Mode Over Time',
    labels={'Avg_Delay_Days': 'Avg Delay (Days)'},
    hover_data=['Total_Delay_Days', 'Shipment_Count', 'Total_Weight', 'Total_Freight_Cost']
)

# Adjust the plot size
fig.update_layout(
    xaxis_title='Date', 
    yaxis_title='Avg Delay (Days)', 
    legend_title_text='Shipment Mode',
    width=1100,  # Set the width of the plot
    height=800   # Set the height of the plot
)

# Add interactivity: Enable filtering by shipment mode
fig.update_layout(
    updatemenus=[
        {
            "buttons": [
                {
                    "label": "All",
                    "method": "update",
                    "args": [{"visible": [True] * len(fig.data)}]
                }
            ] + [
                {
                    "label": mode,
                    "method": "update",
                    "args": [{"visible": [mode == trace.name for trace in fig.data]}]
                } for mode in shipment_mode_analysis['Shipment Mode'].unique()
            ],
            "direction": "down",
            "showactive": True,
        }
    ]
)

# Display the interactive plot
fig.show()
In [36]:
from sklearn.cluster import KMeans
In [37]:
# Prepare the data for clustering (we only need the 'Delivery Delay (Days)' feature)
delay_data = df[['Delivery Delay (Days)']].dropna()  # Remove rows with NaN delays
In [38]:
# Apply K-means clustering
kmeans = KMeans(n_clusters=3, random_state=42)  # Adjust the number of clusters as needed
kmeans.fit(delay_data)
Out[38]:
KMeans(n_clusters=3, random_state=42)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
KMeans(n_clusters=3, random_state=42)
In [39]:
# Add the cluster labels to the dataset
df['Delay Cluster'] = kmeans.labels_

# Determine the threshold by finding the mean delay of the cluster with the highest delay
cluster_centers = kmeans.cluster_centers_
threshold = max(cluster_centers)[0]  # This is our bottleneck threshold
In [40]:
cluster_centers
Out[40]:
array([[ 66.25      ],
       [ -1.27084224],
       [-85.6942446 ]])
In [41]:
# Add the Bottleneck feature based on the new threshold
df['Bottleneck'] = df['Delivery Delay (Days)'].apply(lambda x: 1 if x > threshold else 0)
In [42]:
# Display a sample of rows where bottlenecks occurred
bottleneck_rows = df[df['Bottleneck'] == 1]
bottleneck_rows.head()
Out[42]:
ID Project Code PQ_Number PO_SO_Number ASN_DN_Number Country Managed By Fulfill Via Vendor INCO Term Shipment Mode PQ First Sent to Client Date PO Sent to Vendor Date Scheduled Delivery Date Delivered to Client Date Delivery Recorded Date Product Group Sub Classification Vendor Item Description Molecule/Test Type Brand Dosage Dosage Form Unit_of_Measure_Per_Pack Line_Item_Quantity Line_Item_Value Pack_Price Unit_Price Manufacturing Site First Line Designation Weight_Kilograms Freight_Cost_USD Line_Item_Insurance_USD Month Delivery Delay (Days) delivery_duration Delay Cluster Bottleneck
635 6411 116-ZA-T01 Pre-PQ Process SCMS-41463 ASN-5420 South Africa PMO - US Direct Drop DDP Air 2009-01-04 2009-03-18 2009-06-25 2009-10-30 2009-10-30 ARV Pediatric S. BUYS WHOLESALER Lamivudine 10mg/ml, oral solution, Bottle, 240 ml Lamivudine Generic 10mg/ml Oral solution 240 3304 18667.60 5.65 0.02 Aurobindo Unit III, India Yes 1032.6 6404.258 40.25 2009-10 127 226 0 1
925 9564 116-ZA-T01 Pre-PQ Process SCMS-16600 ASN-2292 South Africa PMO - US Direct Drop DDP Air 2007-08-02 2008-01-04 2008-01-21 2008-04-25 2008-04-25 ARV Pediatric JSI R&T INSTITUTE, INC. Didanosine 2g [Videx], powder for oral solutio... Didanosine Videx 2g Powder for oral solution 200 4 57.92 14.48 0.07 BMS Meymac, France Yes 16.2 1017.844 0.09 2008-04 95 112 0 1
2741 13926 161-ZA-T30 FPQ-5303 SCMS-68750 ASN-8885 South Africa PMO - US Direct Drop DDP Air 2010-03-18 2010-03-19 2010-10-13 2011-01-25 2011-01-25 ARV Adult CIPLA LIMITED Tenofovir Disoproxil Fumarate 300mg, tablets, ... Tenofovir Disoproxil Fumarate Generic 300mg Tablet 30 19992 156337.44 7.82 0.26 Cipla, Goa, India Yes 1378.0 3646.100 337.06 2011-01 104 312 0 1
2759 14241 111-MZ-T30 FPQ-13134 SCMS-215600 ASN-24703 Mozambique PMO - US Direct Drop EXW Air 2013-11-12 2013-11-26 2014-04-09 2014-04-09 2014-07-08 ARV Adult Aurobindo Pharma Limited Didanosine 250mg, delayed-release capsules, 30... Didanosine Generic 250mg Delayed-release capsules 30 800 6064.00 7.58 0.25 Aurobindo Unit III, India Yes 77.0 979.140 6.23 2014-04 90 134 0 1
2782 14542 161-ZA-T30 FPQ-5303 SCMS-68980 ASN-9647 South Africa PMO - US Direct Drop DDP Air 2010-03-18 2010-03-19 2010-08-24 2011-03-04 2011-03-04 ARV Pediatric Aurobindo Pharma Limited Lamivudine 10mg/ml, oral solution w/syringe, B... Lamivudine Generic 10mg/ml Oral solution 240 6395 19185.00 3.00 0.01 Aurobindo Unit III, India Yes 6466.8 6394.358 41.36 2011-03 192 350 0 1
In [43]:
# Display the threshold and some statistics
print(f"Bottleneck Threshold determined by K-means: {threshold:.2f} days")
Bottleneck Threshold determined by K-means: 66.25 days
In [44]:
# Optional: Plot the clusters for visualization
plt.scatter(delay_data, [0]*len(delay_data), c=kmeans.labels_, cmap='viridis')
plt.axvline(x=threshold, color='red', linestyle='--', label=f'Threshold: {threshold:.2f} days')
plt.xlabel('Delivery Delay (Days)')
plt.title('K-means Clustering of Delivery Delays')
plt.legend()
plt.show()
No description has been provided for this image

Add Risk Factor to dataset¶

In [45]:
# Function to determine Risk_Factor
def risk_factor(delay):
    if delay <= 0:
        return 'L'
    elif delay <= threshold:
        return 'M'
    else:
        return 'H'

# Apply the function to create the Risk_Factor column
df['Risk_Factor'] = df['Delivery Delay (Days)'].apply(risk_factor)
In [46]:
df['Risk_Factor'].value_counts()
Out[46]:
Risk_Factor
L    8092
M    2097
H     135
Name: count, dtype: int64

EDA¶

In [309]:
# Distribution Plots
numerical_columns = ['Line_Item_Quantity', 'Line_Item_Value', 'Pack_Price', 'Unit_Price', 'Weight_Kilograms', 
                     'Freight_Cost_USD','Line_Item_Insurance_USD', 'delivery_duration', 'Delivery Delay (Days)']
In [48]:
# Histograms
fig, axes = plt.subplots(nrows=5, ncols=2, figsize=(12, 12))
for ax, col in zip(axes.flatten(), numerical_columns):
    sns.histplot(df[col], ax=ax, kde=True)
    ax.set_title(f'Histogram of {col}')
plt.tight_layout()
plt.show()

# Box Plots
fig, axes = plt.subplots(nrows=5, ncols=2, figsize=(12, 12))
for ax, col in zip(axes.flatten(), numerical_columns):
    sns.boxplot(data=df, y=col, ax=ax)
    ax.set_title(f'Boxplot of {col}')
plt.tight_layout()
plt.show()
No description has been provided for this image
No description has been provided for this image
In [302]:
import plotly.express as px
import plotly.subplots as sp

#categorical_columns = ['Country', 'Product Group', 'Fulfill Via',  'Shipment Mode', 'Risk_Factor']

categorical_columns = ['Country', 'Fulfill Via', 'Shipment Mode', 'Product Group', 'Sub Classification', 
    'Dosage', 'Manufacturing Site', 
    'First Line Designation', 'Risk_Factor']

# Create subplots: 3 rows, 2 columns
fig = sp.make_subplots(rows=5, cols=2, subplot_titles=categorical_columns, vertical_spacing=0.1, horizontal_spacing=0.1)

for i, col in enumerate(categorical_columns):
    # Generate frequency counts
    counts = df[col].value_counts().reset_index()
    counts.columns = [col, 'Count']

    # Create bar plot
    bar_plot = px.bar(counts, y=col, x='Count', orientation='h')

    # Update each subplot
    for trace in bar_plot['data']:
        fig.add_trace(trace, row=(i // 2) + 1, col=(i % 2) + 1)

# Update layout for better aesthetics
fig.update_layout(
    height=1000, width=1200,
    title_text="Frequency of Categorical Columns",
    showlegend=False,
    title_x=0.5,
    margin=dict(l=50, r=50, t=80, b=50),
    font=dict(size=12)
)

fig.show()
In [311]:
from sklearn.preprocessing import LabelEncoder

# Separate categorical and numerical columns
categorical_columns = ['Country', 'Fulfill Via', 'Shipment Mode', 'Product Group', 'Manufacturing Site']

# Initialize the LabelEncoder
label_encoder = LabelEncoder()

# Create a copy of the dataframe for encoded columns
df_encoded = df.copy()

# Encode each categorical column
for column in categorical_columns:
    df_encoded[f'{column}_encoded'] = label_encoder.fit_transform(df[column])

# Combine encoded categorical columns and numerical columns
columns_to_include = [f'{column}_encoded' for column in categorical_columns] + numerical_columns

# Generate the correlation matrix for selected columns
corr_matrix = df_encoded[columns_to_include].corr()

# Plot the correlation heatmap
plt.figure(figsize=(10, 8))  # Adjust the size as needed
#sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', fmt='.2f')
sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', fmt='.2f', annot_kws={"size": 8})  # Adjust the font size using "size"
plt.title('Correlation Heatmap')
plt.show()
No description has been provided for this image

Product_group Quantity Monthly Trends¶

In [52]:
df['Month_timestamp'] = df['Month'].dt.to_timestamp()
In [53]:
# Group the data by 'Month' and 'Product Group' and calculate the sum of 'Line_Item_Quantity'
monthly_quantity = df.groupby(['Month_timestamp', 'Product Group'])['Line_Item_Quantity'].sum().reset_index()
In [54]:
monthly_quantity
Out[54]:
Month_timestamp Product Group Line_Item_Quantity
0 2006-05-01 HRDT 75
1 2006-06-01 HRDT 166
2 2006-07-01 HRDT 50506
3 2006-08-01 ARV 93000
4 2006-08-01 HRDT 1019
... ... ... ...
253 2015-07-01 HRDT 46237
254 2015-08-01 ANTM 26
255 2015-08-01 ARV 2255185
256 2015-08-01 HRDT 8380
257 2015-09-01 ARV 171323

258 rows × 3 columns

In [56]:
# Plot using Plotly for interactive filtering
fig = px.line(
    monthly_quantity, 
    x='Month_timestamp', 
    y='Line_Item_Quantity', 
    color='Product Group', 
    title='Line Item Quantity by Product Group Grouped by Month',
    labels={'Line_Item_Quantity': 'Total Line Item Quantity'},
    markers=True
)

# Adjust the plot size
fig.update_layout(
    xaxis_title='Month_timestamp', 
    yaxis_title='Total Line Item Quantity', 
    legend_title_text='Product Group',
    width=1000,  # Set the width of the plot
    height=600   # Set the height of the plot
)

# Add interactivity: Enable filtering by product group
fig.update_layout(
    updatemenus=[
        {
            "buttons": [
                {
                    "label": "All",
                    "method": "update",
                    "args": [{"visible": [True] * len(fig.data)}]
                }
            ] + [
                {
                    "label": product_group,
                    "method": "update",
                    "args": [{"visible": [trace.name == product_group for trace in fig.data]}]
                } for product_group in monthly_quantity['Product Group'].unique()
            ],
            "direction": "down",
            "showactive": True,
        }
    ]
)
# Display the interactive plot
fig.show()

Outliers detection and removal¶

In [57]:
# Check Skewness of all feature for checking outliers
plt.figure(figsize = (10, 15))
for i, col in enumerate(df.select_dtypes(include=['float64', 'int64']).columns[:-1], 1):
    plt.subplot(6, 3, i)
    skewness = df[col].skew()
    sns.distplot(df[col], kde = True, label = "Skew = %2f" %(skewness))
    plt.title(f"Skewness of {col} Data")
    plt.tight_layout()
    plt.legend(loc = "best")
    plt.xticks(rotation = 90)
    plt.plot()
No description has been provided for this image
In [58]:
# Function to remove outliers using IQR method
def remove_outliers_iqr(df, column):
    Q1 = df[column].quantile(0.25)
    Q3 = df[column].quantile(0.75)
    IQR = Q3 - Q1
    lower_bound = Q1 - 1.5 * IQR
    upper_bound = Q3 + 1.5 * IQR
    return df[(df[column] >= lower_bound) & (df[column] <= upper_bound)]

# Select relevant features for the ConvLSTM model
remove_outlier_features = ['Line_Item_Quantity', 'Line_Item_Value','Weight_Kilograms','Freight_Cost_USD']
# Remove outliers for each numerical column
for column in remove_outlier_features:
    df_no_outlier = remove_outliers_iqr(df, column)
In [59]:
csv_file_path = 'no_outlir_output.csv'
df_no_outlier.to_csv(csv_file_path, index=False)
In [60]:
# Check Skewness of all feature for checking outliers
plt.figure(figsize = (10, 15))
for i, col in enumerate(df_no_outlier.select_dtypes(include=['float64', 'int64']).columns[:-1], 1):
    plt.subplot(6, 3, i)
    skewness = df_no_outlier[col].skew()
    sns.distplot(df[col], kde = True, label = "Skew = %2f" %(skewness))
    plt.title(f"Skewness of {col} Data")
    plt.tight_layout()
    plt.legend(loc = "best")
    plt.xticks(rotation = 90)
    plt.plot()
No description has been provided for this image

Freight Cost Prediction¶

Regression Models for Freight Cost Prediction¶

In [313]:
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

# Encoding categorical variables
categorical_features = ['Country', 'Managed By', 'Fulfill Via', 'Vendor INCO Term', 'Shipment Mode', 'Manufacturing Site']
numerical_features = ['Line_Item_Quantity', 'Line_Item_Value', 'Pack_Price', 'Unit_Price', 'Weight_Kilograms','Line_Item_Insurance_USD']

# Defining the target and features
X = df[categorical_features + numerical_features]
y = df['Freight_Cost_USD']

# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Creating preprocessing pipelines
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline(steps=[
    ('encoder', OneHotEncoder(handle_unknown='ignore'))
])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numerical_features),
        ('cat', categorical_transformer, categorical_features)
    ])

# Define models
models = {
   
    'Random Forest': RandomForestRegressor(random_state=42),
    'Gradient Boosting': GradientBoostingRegressor(random_state=42),
    'XGBoost': XGBRegressor(random_state=42),
    'LightGBM': LGBMRegressor(random_state=42)
    
    
}

# Function to evaluate models
def evaluate_model(model, X_train, X_test, y_train, y_test):
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, y_pred)
    mse = mean_squared_error(y_test, y_pred)
    rmse = np.sqrt(mse)
    r2 = r2_score(y_test, y_pred)
    cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring='neg_mean_squared_error')
    cv_rmse_scores = np.sqrt(-cv_scores)
    return mae, mse, rmse, r2, cv_rmse_scores.mean(), y_pred

# Evaluate each model
results = {}
predictions = {}
for name, model in models.items():
    pipeline = Pipeline(steps=[
        ('preprocessor', preprocessor),
        ('regressor', model)
    ])
    mae, mse, rmse, r2, cv_rmse, y_pred = evaluate_model(pipeline, X_train, X_test, y_train, y_test)
    results[name] = {
        'MAE': mae,
        'MSE': mse,
        'RMSE': rmse,
        'R²': r2,
        'CV RMSE': cv_rmse
    }
    predictions[name] = y_pred

# Display results
results_df = pd.DataFrame(results).T
print(results_df)
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001642 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 1555
[LightGBM] [Info] Number of data points in the train set: 8259, number of used features: 76
[LightGBM] [Info] Start training from score 9699.417132
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001393 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1547
[LightGBM] [Info] Number of data points in the train set: 6607, number of used features: 73
[LightGBM] [Info] Start training from score 9691.171316
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000980 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 1545
[LightGBM] [Info] Number of data points in the train set: 6607, number of used features: 74
[LightGBM] [Info] Start training from score 9657.052019
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001331 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1532
[LightGBM] [Info] Number of data points in the train set: 6607, number of used features: 72
[LightGBM] [Info] Start training from score 9684.073417
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001424 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1548
[LightGBM] [Info] Number of data points in the train set: 6607, number of used features: 74
[LightGBM] [Info] Start training from score 9724.847980
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001395 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1544
[LightGBM] [Info] Number of data points in the train set: 6608, number of used features: 73
[LightGBM] [Info] Start training from score 9739.934796
                           MAE           MSE         RMSE        R²  \
Random Forest      3708.260359  5.835009e+07  7638.723230  0.678160   
Gradient Boosting  4238.850527  6.604541e+07  8126.832627  0.635715   
XGBoost            3851.110525  6.974573e+07  8351.390621  0.615306   
LightGBM           4009.997808  6.182095e+07  7862.629693  0.659016   

                       CV RMSE  
Random Forest      8135.403955  
Gradient Boosting  8412.441627  
XGBoost            8244.423133  
LightGBM           8313.273753  
In [78]:
# Visualize predicted vs actual values for each model
for name, y_pred in predictions.items():
    plt.figure(figsize=(10, 6))
    plt.scatter(y_test, y_pred, alpha=0.3)
    plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], '--r', linewidth=2)
    plt.xlim(-1000, 100000)  # Adjust the values as needed
    plt.ylim(-1000, 100000)
    plt.title(f'{name} Predicted vs Actual')
    plt.xlabel('Actual Freight Cost')
    plt.ylabel('Predicted Freight Cost')
    plt.show()

'''# Plot residuals for each model
for name, y_pred in predictions.items():
    residuals = y_test - y_pred
    plt.figure(figsize=(10, 6))
    plt.scatter(y_pred, residuals, alpha=0.3)
    plt.axhline(0, color='r', linestyle='--', linewidth=2)
    plt.xlim(-1000, 100000)  # Adjust the values as needed
   # plt.ylim(-1000, 100000)
    plt.title(f'{name} Residuals')
    plt.xlabel('Predicted Freight Cost')
    plt.ylabel('Residuals')
    plt.show()'''
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image
Out[78]:
"# Plot residuals for each model\nfor name, y_pred in predictions.items():\n    residuals = y_test - y_pred\n    plt.figure(figsize=(10, 6))\n    plt.scatter(y_pred, residuals, alpha=0.3)\n    plt.axhline(0, color='r', linestyle='--', linewidth=2)\n    plt.xlim(-1000, 100000)  # Adjust the values as needed\n   # plt.ylim(-1000, 100000)\n    plt.title(f'{name} Residuals')\n    plt.xlabel('Predicted Freight Cost')\n    plt.ylabel('Residuals')\n    plt.show()"

Regression Model for Freight Cost (no_outliers DF)¶

In [73]:
# Encoding categorical variables
categorical_features = ['Country', 'Managed By', 'Fulfill Via', 'Vendor INCO Term', 'Shipment Mode', 'Manufacturing Site']
numerical_features = ['Line_Item_Quantity', 'Line_Item_Value', 'Pack_Price', 'Unit_Price', 'Weight_Kilograms','Line_Item_Insurance_USD']

# Defining the target and features
X = df_no_outlier[categorical_features + numerical_features]
y = df_no_outlier['Freight_Cost_USD']

# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Creating preprocessing pipelines
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline(steps=[
    ('encoder', OneHotEncoder(handle_unknown='ignore'))
])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numerical_features),
        ('cat', categorical_transformer, categorical_features)
    ])

# Define models
models = {
    'Random Forest': RandomForestRegressor(random_state=42),
    'Gradient Boosting': GradientBoostingRegressor(random_state=42),
    'XGBoost': XGBRegressor(random_state=42),
    'LightGBM': LGBMRegressor(random_state=42)
}

# Function to evaluate models
def evaluate_model(model, X_train, X_test, y_train, y_test):
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    mae = mean_absolute_error(y_test, y_pred)
    mse = mean_squared_error(y_test, y_pred)
    rmse = np.sqrt(mse)
    r2 = r2_score(y_test, y_pred)
    cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring='neg_mean_squared_error')
    cv_rmse_scores = np.sqrt(-cv_scores)
    return mae, mse, rmse, r2, cv_rmse_scores.mean(), y_pred

# Evaluate each model
results = {}
predictions = {}
for name, model in models.items():
    pipeline = Pipeline(steps=[
        ('preprocessor', preprocessor),
        ('regressor', model)
    ])
    mae, mse, rmse, r2, cv_rmse, y_pred = evaluate_model(pipeline, X_train, X_test, y_train, y_test)
    results[name] = {
        'MAE': mae,
        'MSE': mse,
        'RMSE': rmse,
        'R²': r2,
        'CV RMSE': cv_rmse
    }
    predictions[name] = y_pred

# Display results
results_df = pd.DataFrame(results).T
print(results_df)
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000224 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 1554
[LightGBM] [Info] Number of data points in the train set: 7720, number of used features: 76
[LightGBM] [Info] Start training from score 7164.086327
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000385 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1537
[LightGBM] [Info] Number of data points in the train set: 6176, number of used features: 71
[LightGBM] [Info] Start training from score 7119.595838
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000494 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1541
[LightGBM] [Info] Number of data points in the train set: 6176, number of used features: 72
[LightGBM] [Info] Start training from score 7176.904347
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000346 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1541
[LightGBM] [Info] Number of data points in the train set: 6176, number of used features: 73
[LightGBM] [Info] Start training from score 7170.440847
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000236 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 1541
[LightGBM] [Info] Number of data points in the train set: 6176, number of used features: 74
[LightGBM] [Info] Start training from score 7179.872915
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000364 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 1540
[LightGBM] [Info] Number of data points in the train set: 6176, number of used features: 73
[LightGBM] [Info] Start training from score 7173.617689
                           MAE           MSE         RMSE        R²  \
Random Forest      2406.583725  1.407644e+07  3751.858242  0.653297   
Gradient Boosting  2771.770342  1.615087e+07  4018.814259  0.602204   
XGBoost            2478.482355  1.426163e+07  3776.456733  0.648736   
LightGBM           2466.741867  1.371172e+07  3702.934082  0.662280   

                       CV RMSE  
Random Forest      3967.566263  
Gradient Boosting  4118.019020  
XGBoost            3952.038806  
LightGBM           3901.247880  
In [76]:
# Visualize predicted vs actual values for each model
for name, y_pred in predictions.items():
    plt.figure(figsize=(10, 6))
    plt.scatter(y_test, y_pred, alpha=0.3)
    plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], '--r', linewidth=2)
    plt.xlim(-1000, 30000)  # Adjust the values as needed
    plt.ylim(-1000, 30000)
    plt.title(f'{name} Predicted vs Actual')
    plt.xlabel('Actual Freight Cost')
    plt.ylabel('Predicted Freight Cost')
    plt.show()
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image

Predict Freight Cost Using Neural Network¶

In [80]:
from sklearn.compose import ColumnTransformer
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential

# Encoding categorical variables
categorical_features = ['Country', 'Managed By', 'Fulfill Via', 'Vendor INCO Term', 'Shipment Mode' ,'Manufacturing Site']
numerical_features = ['Line_Item_Quantity', 'Line_Item_Value', 'Pack_Price', 'Unit_Price', 'Weight_Kilograms']

# Defining the target and features
X = df[categorical_features + numerical_features]
y = df['Freight_Cost_USD']

# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Creating preprocessing pipelines
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline(steps=[
    ('encoder', OneHotEncoder(handle_unknown='ignore'))
])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numerical_features),
        ('cat', categorical_transformer, categorical_features)
    ])

# Preprocess the data
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)

# Define the neural network model
model = Sequential()
model.add(Dense(64, input_dim=X_train.shape[1], activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(1))


# Summary of the model
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ dense (Dense)                        │ (None, 64)                  │           9,792 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_1 (Dense)                      │ (None, 32)                  │           2,080 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_2 (Dense)                      │ (None, 16)                  │             528 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_3 (Dense)                      │ (None, 1)                   │              17 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 12,417 (48.50 KB)
 Trainable params: 12,417 (48.50 KB)
 Non-trainable params: 0 (0.00 B)
In [81]:
# Compile the model
model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error')

# Train the model
history = model.fit(X_train, y_train, epochs=200, batch_size=32, validation_split=0.2, verbose=1)

# Make predictions
y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)

# Inverse transform the predictions if necessary (depends on how the preprocessing was done)
# y_pred_train = scaler.inverse_transform(y_pred_train)
# y_pred_test = scaler.inverse_transform(y_pred_test)

# Calculate performance metrics
train_mae = mean_absolute_error(y_train, y_pred_train)
train_mse = mean_squared_error(y_train, y_pred_train)
train_rmse = np.sqrt(train_mse)
train_r2 = r2_score(y_train, y_pred_train)

test_mae = mean_absolute_error(y_test, y_pred_test)
test_mse = mean_squared_error(y_test, y_pred_test)
test_rmse = np.sqrt(test_mse)
test_r2 = r2_score(y_test, y_pred_test)

print(f'Train MAE: {train_mae}')
print(f'Train MSE: {train_mse}')
print(f'Train RMSE: {train_rmse}')
print(f'Train R²: {train_r2}')

print(f'Test MAE: {test_mae}')
print(f'Test MSE: {test_mse}')
print(f'Test RMSE: {test_rmse}')
print(f'Test R²: {test_r2}')
Epoch 1/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 4ms/step - loss: 260368048.0000 - val_loss: 229904592.0000
Epoch 2/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 198904304.0000 - val_loss: 150303936.0000
Epoch 3/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 130408976.0000 - val_loss: 149223856.0000
Epoch 4/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 127466872.0000 - val_loss: 151839360.0000
Epoch 5/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111844256.0000 - val_loss: 154469984.0000
Epoch 6/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 123613040.0000 - val_loss: 158633584.0000
Epoch 7/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 142790400.0000 - val_loss: 159331712.0000
Epoch 8/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 140534528.0000 - val_loss: 162453904.0000
Epoch 9/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97857280.0000 - val_loss: 166471136.0000
Epoch 10/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 129356488.0000 - val_loss: 167832144.0000
Epoch 11/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112272720.0000 - val_loss: 171982784.0000
Epoch 12/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111718600.0000 - val_loss: 173797984.0000
Epoch 13/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 116090032.0000 - val_loss: 174224976.0000
Epoch 14/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 133366992.0000 - val_loss: 176474496.0000
Epoch 15/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 123044424.0000 - val_loss: 178405344.0000
Epoch 16/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 114496296.0000 - val_loss: 181823264.0000
Epoch 17/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 122848496.0000 - val_loss: 182295680.0000
Epoch 18/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 113154936.0000 - val_loss: 185817216.0000
Epoch 19/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109324056.0000 - val_loss: 188482784.0000
Epoch 20/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109825672.0000 - val_loss: 193531952.0000
Epoch 21/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 104534464.0000 - val_loss: 196033072.0000
Epoch 22/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 110458328.0000 - val_loss: 196369328.0000
Epoch 23/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 122800040.0000 - val_loss: 197961200.0000
Epoch 24/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112736712.0000 - val_loss: 200102928.0000
Epoch 25/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 126039976.0000 - val_loss: 203272512.0000
Epoch 26/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 123085800.0000 - val_loss: 199862512.0000
Epoch 27/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 115569080.0000 - val_loss: 203810656.0000
Epoch 28/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 137050272.0000 - val_loss: 203621280.0000
Epoch 29/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 140546208.0000 - val_loss: 204644384.0000
Epoch 30/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 98984424.0000 - val_loss: 205991056.0000
Epoch 31/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 136919216.0000 - val_loss: 210595792.0000
Epoch 32/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 126344808.0000 - val_loss: 209449696.0000
Epoch 33/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 121525128.0000 - val_loss: 210327616.0000
Epoch 34/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106272408.0000 - val_loss: 212786480.0000
Epoch 35/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 122141608.0000 - val_loss: 213508400.0000
Epoch 36/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97639128.0000 - val_loss: 216361808.0000
Epoch 37/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 117970480.0000 - val_loss: 214394064.0000
Epoch 38/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93444096.0000 - val_loss: 221568288.0000
Epoch 39/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112518872.0000 - val_loss: 220980928.0000
Epoch 40/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 104825616.0000 - val_loss: 222665344.0000
Epoch 41/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 101732280.0000 - val_loss: 224666592.0000
Epoch 42/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 116600128.0000 - val_loss: 221117360.0000
Epoch 43/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 113878392.0000 - val_loss: 225137760.0000
Epoch 44/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 145236912.0000 - val_loss: 220880816.0000
Epoch 45/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 108959280.0000 - val_loss: 223675104.0000
Epoch 46/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106771744.0000 - val_loss: 228739984.0000
Epoch 47/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109793136.0000 - val_loss: 224288624.0000
Epoch 48/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 101948632.0000 - val_loss: 229995680.0000
Epoch 49/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 108242280.0000 - val_loss: 229488064.0000
Epoch 50/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 98313776.0000 - val_loss: 226728976.0000
Epoch 51/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 115674616.0000 - val_loss: 227050080.0000
Epoch 52/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106091000.0000 - val_loss: 231209120.0000
Epoch 53/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 101899912.0000 - val_loss: 227747360.0000
Epoch 54/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 110745288.0000 - val_loss: 228342304.0000
Epoch 55/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 104875624.0000 - val_loss: 230161280.0000
Epoch 56/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 104459304.0000 - val_loss: 232108736.0000
Epoch 57/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 110975952.0000 - val_loss: 232652208.0000
Epoch 58/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106226936.0000 - val_loss: 232063792.0000
Epoch 59/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 99908616.0000 - val_loss: 231090496.0000
Epoch 60/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 96954888.0000 - val_loss: 234817776.0000
Epoch 61/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 115535624.0000 - val_loss: 227811888.0000
Epoch 62/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97754312.0000 - val_loss: 233338176.0000
Epoch 63/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 120428256.0000 - val_loss: 234610528.0000
Epoch 64/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 108611920.0000 - val_loss: 236524032.0000
Epoch 65/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 101141800.0000 - val_loss: 235878304.0000
Epoch 66/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94253008.0000 - val_loss: 235415008.0000
Epoch 67/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 140683296.0000 - val_loss: 236988464.0000
Epoch 68/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 88990976.0000 - val_loss: 231237664.0000
Epoch 69/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97346720.0000 - val_loss: 231879440.0000
Epoch 70/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 95268440.0000 - val_loss: 231259920.0000
Epoch 71/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109858040.0000 - val_loss: 234398416.0000
Epoch 72/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 117137952.0000 - val_loss: 233419120.0000
Epoch 73/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 116778312.0000 - val_loss: 235290304.0000
Epoch 74/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105120208.0000 - val_loss: 238874768.0000
Epoch 75/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 102252800.0000 - val_loss: 232402672.0000
Epoch 76/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105292128.0000 - val_loss: 230616416.0000
Epoch 77/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 96556976.0000 - val_loss: 234478624.0000
Epoch 78/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 129319176.0000 - val_loss: 230383968.0000
Epoch 79/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109402760.0000 - val_loss: 233633408.0000
Epoch 80/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111101920.0000 - val_loss: 234022912.0000
Epoch 81/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 103061544.0000 - val_loss: 232660336.0000
Epoch 82/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106327304.0000 - val_loss: 233190304.0000
Epoch 83/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 110929584.0000 - val_loss: 236000288.0000
Epoch 84/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 98938984.0000 - val_loss: 231995776.0000
Epoch 85/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 121248936.0000 - val_loss: 228533984.0000
Epoch 86/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 98012976.0000 - val_loss: 229906960.0000
Epoch 87/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94831288.0000 - val_loss: 231763664.0000
Epoch 88/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 107153224.0000 - val_loss: 232665616.0000
Epoch 89/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 104900944.0000 - val_loss: 231026704.0000
Epoch 90/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 116305216.0000 - val_loss: 230767408.0000
Epoch 91/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 122682336.0000 - val_loss: 226326016.0000
Epoch 92/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 126135144.0000 - val_loss: 229686688.0000
Epoch 93/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 115614448.0000 - val_loss: 234459200.0000
Epoch 94/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 101359544.0000 - val_loss: 233564752.0000
Epoch 95/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 113533352.0000 - val_loss: 230436928.0000
Epoch 96/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 114582600.0000 - val_loss: 236357984.0000
Epoch 97/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111126304.0000 - val_loss: 233557872.0000
Epoch 98/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 96923208.0000 - val_loss: 235486704.0000
Epoch 99/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 126856496.0000 - val_loss: 231486560.0000
Epoch 100/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 92814648.0000 - val_loss: 236576928.0000
Epoch 101/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 117851656.0000 - val_loss: 234435296.0000
Epoch 102/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 121785296.0000 - val_loss: 236568064.0000
Epoch 103/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 116175744.0000 - val_loss: 236340528.0000
Epoch 104/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 101196416.0000 - val_loss: 236423408.0000
Epoch 105/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109453680.0000 - val_loss: 232884688.0000
Epoch 106/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 103521960.0000 - val_loss: 237412288.0000
Epoch 107/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 131807584.0000 - val_loss: 236340880.0000
Epoch 108/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 121190352.0000 - val_loss: 230327040.0000
Epoch 109/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97070616.0000 - val_loss: 234247040.0000
Epoch 110/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 124851200.0000 - val_loss: 236663376.0000
Epoch 111/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 95863720.0000 - val_loss: 236942576.0000
Epoch 112/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 97023448.0000 - val_loss: 244032704.0000
Epoch 113/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 98524104.0000 - val_loss: 235293760.0000
Epoch 114/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111705712.0000 - val_loss: 233193936.0000
Epoch 115/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112553584.0000 - val_loss: 231377440.0000
Epoch 116/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112365064.0000 - val_loss: 235667392.0000
Epoch 117/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105685320.0000 - val_loss: 237016560.0000
Epoch 118/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112699816.0000 - val_loss: 235173472.0000
Epoch 119/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109411208.0000 - val_loss: 243119136.0000
Epoch 120/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 117269600.0000 - val_loss: 225481296.0000
Epoch 121/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 102007616.0000 - val_loss: 228520656.0000
Epoch 122/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 110756024.0000 - val_loss: 231411520.0000
Epoch 123/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105304656.0000 - val_loss: 233709664.0000
Epoch 124/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 103648216.0000 - val_loss: 230854384.0000
Epoch 125/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 100331864.0000 - val_loss: 231286976.0000
Epoch 126/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97672872.0000 - val_loss: 230848464.0000
Epoch 127/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 119344200.0000 - val_loss: 230852640.0000
Epoch 128/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 119720120.0000 - val_loss: 230411104.0000
Epoch 129/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 98167112.0000 - val_loss: 231284928.0000
Epoch 130/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94381432.0000 - val_loss: 234753184.0000
Epoch 131/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 91991440.0000 - val_loss: 228978912.0000
Epoch 132/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105739984.0000 - val_loss: 230167728.0000
Epoch 133/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105460184.0000 - val_loss: 234062672.0000
Epoch 134/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 136026128.0000 - val_loss: 231683472.0000
Epoch 135/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111159568.0000 - val_loss: 233133872.0000
Epoch 136/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112437104.0000 - val_loss: 233399344.0000
Epoch 137/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 117637248.0000 - val_loss: 231568640.0000
Epoch 138/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 86372408.0000 - val_loss: 235108240.0000
Epoch 139/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 102299416.0000 - val_loss: 232956144.0000
Epoch 140/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 90131024.0000 - val_loss: 236817552.0000
Epoch 141/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 120204208.0000 - val_loss: 229483392.0000
Epoch 142/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 90664216.0000 - val_loss: 235680704.0000
Epoch 143/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 103180336.0000 - val_loss: 233617472.0000
Epoch 144/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 90072408.0000 - val_loss: 236428592.0000
Epoch 145/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 90674672.0000 - val_loss: 234992144.0000
Epoch 146/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97271184.0000 - val_loss: 230515424.0000
Epoch 147/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93745000.0000 - val_loss: 234727280.0000
Epoch 148/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 120177232.0000 - val_loss: 235320752.0000
Epoch 149/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 117428680.0000 - val_loss: 233922448.0000
Epoch 150/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 103939240.0000 - val_loss: 233944352.0000
Epoch 151/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106324824.0000 - val_loss: 235042304.0000
Epoch 152/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94737704.0000 - val_loss: 239593264.0000
Epoch 153/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 106381576.0000 - val_loss: 235456368.0000
Epoch 154/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 80082008.0000 - val_loss: 241005568.0000
Epoch 155/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 93352808.0000 - val_loss: 233615376.0000
Epoch 156/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93587648.0000 - val_loss: 237694688.0000
Epoch 157/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 107422416.0000 - val_loss: 235673328.0000
Epoch 158/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 112581528.0000 - val_loss: 234179600.0000
Epoch 159/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97392784.0000 - val_loss: 239031456.0000
Epoch 160/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 92980952.0000 - val_loss: 240651120.0000
Epoch 161/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 99960448.0000 - val_loss: 238927888.0000
Epoch 162/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 95470784.0000 - val_loss: 242417472.0000
Epoch 163/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94501416.0000 - val_loss: 237623344.0000
Epoch 164/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 102502848.0000 - val_loss: 236697424.0000
Epoch 165/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 110693360.0000 - val_loss: 239807424.0000
Epoch 166/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105231512.0000 - val_loss: 230547312.0000
Epoch 167/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 111594336.0000 - val_loss: 236412704.0000
Epoch 168/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 130820360.0000 - val_loss: 234184640.0000
Epoch 169/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 118758224.0000 - val_loss: 231411520.0000
Epoch 170/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 88303864.0000 - val_loss: 236991776.0000
Epoch 171/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 105489232.0000 - val_loss: 241405104.0000
Epoch 172/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 95033040.0000 - val_loss: 236453552.0000
Epoch 173/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 87644912.0000 - val_loss: 238798240.0000
Epoch 174/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 95243488.0000 - val_loss: 240056848.0000
Epoch 175/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 100886808.0000 - val_loss: 239692224.0000
Epoch 176/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94638944.0000 - val_loss: 237016176.0000
Epoch 177/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 109162352.0000 - val_loss: 239338496.0000
Epoch 178/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 89026472.0000 - val_loss: 241266048.0000
Epoch 179/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93431368.0000 - val_loss: 236521840.0000
Epoch 180/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 118783360.0000 - val_loss: 239976752.0000
Epoch 181/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 87537160.0000 - val_loss: 241015120.0000
Epoch 182/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 103246912.0000 - val_loss: 242898448.0000
Epoch 183/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 91116424.0000 - val_loss: 240772176.0000
Epoch 184/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 100853272.0000 - val_loss: 239304224.0000
Epoch 185/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93462912.0000 - val_loss: 248069184.0000
Epoch 186/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 94963968.0000 - val_loss: 243062176.0000
Epoch 187/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 102653392.0000 - val_loss: 240025744.0000
Epoch 188/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 96207808.0000 - val_loss: 241901552.0000
Epoch 189/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93575928.0000 - val_loss: 245899504.0000
Epoch 190/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 85690992.0000 - val_loss: 243473904.0000
Epoch 191/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 99249304.0000 - val_loss: 243291200.0000
Epoch 192/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 123761584.0000 - val_loss: 239294304.0000
Epoch 193/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 88089928.0000 - val_loss: 241630000.0000
Epoch 194/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 107581312.0000 - val_loss: 241153264.0000
Epoch 195/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 92667536.0000 - val_loss: 238327200.0000
Epoch 196/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 80789456.0000 - val_loss: 243750944.0000
Epoch 197/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 93540808.0000 - val_loss: 244636736.0000
Epoch 198/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 97296072.0000 - val_loss: 239444112.0000
Epoch 199/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 92946128.0000 - val_loss: 248981520.0000
Epoch 200/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step - loss: 92100240.0000 - val_loss: 247323904.0000
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step
Train MAE: 5369.6602923190085
Train MSE: 127768930.43523088
Train RMSE: 11303.491957586863
Train R²: 0.2900771729403083
Test MAE: 5588.393437154035
Test MSE: 103624997.46932775
Test RMSE: 10179.636411450447
Test R²: 0.4284388625448369
In [82]:
# Visualize training history
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training History')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
No description has been provided for this image
In [85]:
# Visualize predicted vs actual values
plt.figure(figsize=(10, 6))
plt.scatter(y_test, y_pred_test, alpha=0.3)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], '--r', linewidth=2)
plt.title('Neural Network: Predicted vs Actual Freight Cost')
plt.xlabel('Actual Freight Cost')
plt.ylabel('Predicted Freight Cost')
plt.xlim(-1000, 100000)  # Adjust the values as needed
plt.ylim(-1000, 100000)
plt.show()
No description has been provided for this image

Hyperparameter Tuning for predicting freight cost¶

In [98]:
# pip install keras-tuner
In [86]:
import keras_tuner as kt

# Encoding categorical variables
categorical_features = ['Country', 'Managed By', 'Fulfill Via', 'Vendor INCO Term', 'Shipment Mode', 'Manufacturing Site']
numerical_features = ['Line_Item_Quantity', 'Line_Item_Value', 'Pack_Price', 'Unit_Price', 'Weight_Kilograms']

# Defining the target and features
X = df[categorical_features + numerical_features]
y = df['Freight_Cost_USD']

# Ensure X is a DataFrame
X = pd.DataFrame(X)

# Splitting the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Creating preprocessing pipelines
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())
])

categorical_transformer = Pipeline(steps=[
    ('encoder', OneHotEncoder(handle_unknown='ignore'))
])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numerical_features),
        ('cat', categorical_transformer, categorical_features)
    ])

# Preprocess the data
X_train = preprocessor.fit_transform(X_train)
X_test = preprocessor.transform(X_test)
In [88]:
import keras_tuner as kt
'''The Tuner class is used to perform the hyperparameter search.
Keras Tuner provides several tuners like RandomSearch, BayesianOptimization, Hyperband, and SklearnTuner.'''
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

def build_model(hp):
    model = Sequential()
    model.add(Dense(hp.Int('units1', min_value=32, max_value=512, step=32), 
                    input_dim=X_train.shape[1], activation='relu'))
    model.add(Dropout(hp.Float('dropout1', min_value=0.0, max_value=0.5, step=0.1)))
    model.add(Dense(hp.Int('units2', min_value=32, max_value=512, step=32), activation='relu'))
    model.add(Dropout(hp.Float('dropout2', min_value=0.0, max_value=0.5, step=0.1)))
    model.add(Dense(hp.Int('units3', min_value=32, max_value=512, step=32), activation='relu'))
    model.add(Dense(1))

    model.compile(optimizer=Adam(hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])),
                  loss='mean_squared_error',
                  metrics=['mean_squared_error'])
    return model
In [91]:
tuner = kt.RandomSearch(
    build_model,
    objective='val_mean_squared_error',
    max_trials=10,
    executions_per_trial=1,
    directory='my_dir',
    project_name='freight_cost_prediction'
)
In [92]:
tuner.search(X_train, y_train, epochs=50, validation_split=0.2, verbose=1)

# Get the optimal hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]

print(f"""
The hyperparameter search is complete. The optimal number of units in the first densely connected layer is {best_hps.get('units1')},
the optimal number of units in the second densely connected layer is {best_hps.get('units2')},
the optimal number of units in the third densely connected layer is {best_hps.get('units3')},
the optimal dropout rate for the first dropout layer is {best_hps.get('dropout1')},
the optimal dropout rate for the second dropout layer is {best_hps.get('dropout2')},
and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}.
""")
Trial 10 Complete [00h 00m 55s]
val_mean_squared_error: 147057216.0

Best val_mean_squared_error So Far: 70048344.0
Total elapsed time: 00h 08m 34s

The hyperparameter search is complete. The optimal number of units in the first densely connected layer is 480,
the optimal number of units in the second densely connected layer is 160,
the optimal number of units in the third densely connected layer is 32,
the optimal dropout rate for the first dropout layer is 0.1,
the optimal dropout rate for the second dropout layer is 0.0,
and the optimal learning rate for the optimizer is 0.01.

In [93]:
'''Trial 10 Complete [00h 01m 27s]
val_mean_squared_error: 47706828.0

Best val_mean_squared_error So Far: 47706828.0
Total elapsed time: 00h 12m 36s

The hyperparameter search is complete. The optimal number of units in the first densely connected layer is 384,
the optimal number of units in the second densely connected layer is 512,
the optimal number of units in the third densely connected layer is 128,
the optimal dropout rate for the first dropout layer is 0.30000000000000004,
the optimal dropout rate for the second dropout layer is 0.0,
and the optimal learning rate for the optimizer is 0.01.'''

# Build the model with the optimal hyperparameters and train it
model = tuner.hypermodel.build(best_hps)
# Summary of the model
model.summary()

history = model.fit(X_train, y_train, epochs=200, validation_split=0.2, verbose=1)

# Evaluate the model
y_pred_train = model.predict(X_train)
y_pred_test = model.predict(X_test)

# Calculate performance metrics
train_mae = mean_absolute_error(y_train, y_pred_train)
train_mse = mean_squared_error(y_train, y_pred_train)
train_rmse = np.sqrt(train_mse)
train_r2 = r2_score(y_train, y_pred_train)

test_mae = mean_absolute_error(y_test, y_pred_test)
test_mse = mean_squared_error(y_test, y_pred_test)
test_rmse = np.sqrt(test_mse)
test_r2 = r2_score(y_test, y_pred_test)

print(f'Train MAE: {train_mae}')
print(f'Train MSE: {train_mse}')
print(f'Train RMSE: {train_rmse}')
print(f'Train R²: {train_r2}')

print(f'Test MAE: {test_mae}')
print(f'Test MSE: {test_mse}')
print(f'Test RMSE: {test_rmse}')
print(f'Test R²: {test_r2}')
Model: "sequential_1"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ dense_4 (Dense)                      │ (None, 480)                 │          73,440 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_2 (Dropout)                  │ (None, 480)                 │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_5 (Dense)                      │ (None, 160)                 │          76,960 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_3 (Dropout)                  │ (None, 160)                 │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_6 (Dense)                      │ (None, 32)                  │           5,152 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_7 (Dense)                      │ (None, 1)                   │              33 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 155,585 (607.75 KB)
 Trainable params: 155,585 (607.75 KB)
 Non-trainable params: 0 (0.00 B)
Epoch 1/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 7ms/step - loss: 202288960.0000 - mean_squared_error: 202288960.0000 - val_loss: 189164592.0000 - val_mean_squared_error: 189164592.0000
Epoch 2/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 112726784.0000 - mean_squared_error: 112726784.0000 - val_loss: 200698224.0000 - val_mean_squared_error: 200698224.0000
Epoch 3/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 110034920.0000 - mean_squared_error: 110034920.0000 - val_loss: 157179136.0000 - val_mean_squared_error: 157179136.0000
Epoch 4/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 123434880.0000 - mean_squared_error: 123434880.0000 - val_loss: 184877488.0000 - val_mean_squared_error: 184877488.0000
Epoch 5/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 93715792.0000 - mean_squared_error: 93715792.0000 - val_loss: 144451088.0000 - val_mean_squared_error: 144451088.0000
Epoch 6/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 95753832.0000 - mean_squared_error: 95753832.0000 - val_loss: 144343216.0000 - val_mean_squared_error: 144343216.0000
Epoch 7/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 90445776.0000 - mean_squared_error: 90445776.0000 - val_loss: 97315896.0000 - val_mean_squared_error: 97315896.0000
Epoch 8/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 90354040.0000 - mean_squared_error: 90354040.0000 - val_loss: 87205288.0000 - val_mean_squared_error: 87205288.0000
Epoch 9/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 84168504.0000 - mean_squared_error: 84168504.0000 - val_loss: 128948256.0000 - val_mean_squared_error: 128948256.0000
Epoch 10/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 79510312.0000 - mean_squared_error: 79510312.0000 - val_loss: 105363808.0000 - val_mean_squared_error: 105363808.0000
Epoch 11/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 72733040.0000 - mean_squared_error: 72733040.0000 - val_loss: 96322744.0000 - val_mean_squared_error: 96322744.0000
Epoch 12/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 76767952.0000 - mean_squared_error: 76767952.0000 - val_loss: 85434336.0000 - val_mean_squared_error: 85434336.0000
Epoch 13/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 61704708.0000 - mean_squared_error: 61704708.0000 - val_loss: 73245504.0000 - val_mean_squared_error: 73245504.0000
Epoch 14/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 59817932.0000 - mean_squared_error: 59817932.0000 - val_loss: 85349840.0000 - val_mean_squared_error: 85349840.0000
Epoch 15/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 62371528.0000 - mean_squared_error: 62371528.0000 - val_loss: 71877568.0000 - val_mean_squared_error: 71877568.0000
Epoch 16/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 55388168.0000 - mean_squared_error: 55388168.0000 - val_loss: 77865448.0000 - val_mean_squared_error: 77865448.0000
Epoch 17/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 53992888.0000 - mean_squared_error: 53992888.0000 - val_loss: 74148600.0000 - val_mean_squared_error: 74148600.0000
Epoch 18/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 56219312.0000 - mean_squared_error: 56219312.0000 - val_loss: 75647384.0000 - val_mean_squared_error: 75647384.0000
Epoch 19/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 56852384.0000 - mean_squared_error: 56852384.0000 - val_loss: 67190192.0000 - val_mean_squared_error: 67190192.0000
Epoch 20/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 56650492.0000 - mean_squared_error: 56650492.0000 - val_loss: 69539400.0000 - val_mean_squared_error: 69539400.0000
Epoch 21/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 55175260.0000 - mean_squared_error: 55175260.0000 - val_loss: 74110848.0000 - val_mean_squared_error: 74110848.0000
Epoch 22/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 52258864.0000 - mean_squared_error: 52258864.0000 - val_loss: 81445488.0000 - val_mean_squared_error: 81445488.0000
Epoch 23/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 50021424.0000 - mean_squared_error: 50021424.0000 - val_loss: 72621640.0000 - val_mean_squared_error: 72621640.0000
Epoch 24/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 45272496.0000 - mean_squared_error: 45272496.0000 - val_loss: 74791424.0000 - val_mean_squared_error: 74791424.0000
Epoch 25/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 49131588.0000 - mean_squared_error: 49131588.0000 - val_loss: 77153280.0000 - val_mean_squared_error: 77153280.0000
Epoch 26/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 72193816.0000 - mean_squared_error: 72193816.0000 - val_loss: 78429512.0000 - val_mean_squared_error: 78429512.0000
Epoch 27/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 44748848.0000 - mean_squared_error: 44748848.0000 - val_loss: 79487416.0000 - val_mean_squared_error: 79487416.0000
Epoch 28/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 47527064.0000 - mean_squared_error: 47527064.0000 - val_loss: 70604856.0000 - val_mean_squared_error: 70604856.0000
Epoch 29/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 43272588.0000 - mean_squared_error: 43272588.0000 - val_loss: 71927112.0000 - val_mean_squared_error: 71927112.0000
Epoch 30/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 45051352.0000 - mean_squared_error: 45051352.0000 - val_loss: 74263856.0000 - val_mean_squared_error: 74263856.0000
Epoch 31/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 45155228.0000 - mean_squared_error: 45155228.0000 - val_loss: 89075632.0000 - val_mean_squared_error: 89075632.0000
Epoch 32/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 49165756.0000 - mean_squared_error: 49165756.0000 - val_loss: 109519920.0000 - val_mean_squared_error: 109519920.0000
Epoch 33/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 50681404.0000 - mean_squared_error: 50681404.0000 - val_loss: 81753584.0000 - val_mean_squared_error: 81753584.0000
Epoch 34/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 44040828.0000 - mean_squared_error: 44040828.0000 - val_loss: 75421952.0000 - val_mean_squared_error: 75421952.0000
Epoch 35/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 42358288.0000 - mean_squared_error: 42358288.0000 - val_loss: 75225368.0000 - val_mean_squared_error: 75225368.0000
Epoch 36/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 44297700.0000 - mean_squared_error: 44297700.0000 - val_loss: 80600328.0000 - val_mean_squared_error: 80600328.0000
Epoch 37/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 38659144.0000 - mean_squared_error: 38659144.0000 - val_loss: 76718520.0000 - val_mean_squared_error: 76718520.0000
Epoch 38/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 40996952.0000 - mean_squared_error: 40996952.0000 - val_loss: 76987296.0000 - val_mean_squared_error: 76987296.0000
Epoch 39/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 49901340.0000 - mean_squared_error: 49901340.0000 - val_loss: 86846984.0000 - val_mean_squared_error: 86846984.0000
Epoch 40/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 42812192.0000 - mean_squared_error: 42812192.0000 - val_loss: 78087952.0000 - val_mean_squared_error: 78087952.0000
Epoch 41/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 50496164.0000 - mean_squared_error: 50496164.0000 - val_loss: 76902376.0000 - val_mean_squared_error: 76902376.0000
Epoch 42/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 38235112.0000 - mean_squared_error: 38235112.0000 - val_loss: 76346952.0000 - val_mean_squared_error: 76346952.0000
Epoch 43/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 36127572.0000 - mean_squared_error: 36127572.0000 - val_loss: 74956952.0000 - val_mean_squared_error: 74956952.0000
Epoch 44/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 43986012.0000 - mean_squared_error: 43986012.0000 - val_loss: 88084680.0000 - val_mean_squared_error: 88084680.0000
Epoch 45/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 35116996.0000 - mean_squared_error: 35116996.0000 - val_loss: 76201616.0000 - val_mean_squared_error: 76201616.0000
Epoch 46/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 40082500.0000 - mean_squared_error: 40082500.0000 - val_loss: 77275928.0000 - val_mean_squared_error: 77275928.0000
Epoch 47/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 39852392.0000 - mean_squared_error: 39852392.0000 - val_loss: 77553592.0000 - val_mean_squared_error: 77553592.0000
Epoch 48/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 41808384.0000 - mean_squared_error: 41808384.0000 - val_loss: 81851056.0000 - val_mean_squared_error: 81851056.0000
Epoch 49/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 45347984.0000 - mean_squared_error: 45347984.0000 - val_loss: 100606568.0000 - val_mean_squared_error: 100606568.0000
Epoch 50/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 35593536.0000 - mean_squared_error: 35593536.0000 - val_loss: 75341568.0000 - val_mean_squared_error: 75341568.0000
Epoch 51/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 42962048.0000 - mean_squared_error: 42962048.0000 - val_loss: 80482408.0000 - val_mean_squared_error: 80482408.0000
Epoch 52/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 35014672.0000 - mean_squared_error: 35014672.0000 - val_loss: 77041152.0000 - val_mean_squared_error: 77041152.0000
Epoch 53/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 34835896.0000 - mean_squared_error: 34835896.0000 - val_loss: 99289520.0000 - val_mean_squared_error: 99289520.0000
Epoch 54/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 40256976.0000 - mean_squared_error: 40256976.0000 - val_loss: 75246456.0000 - val_mean_squared_error: 75246456.0000
Epoch 55/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 36828600.0000 - mean_squared_error: 36828600.0000 - val_loss: 75478592.0000 - val_mean_squared_error: 75478592.0000
Epoch 56/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 36415644.0000 - mean_squared_error: 36415644.0000 - val_loss: 76097616.0000 - val_mean_squared_error: 76097616.0000
Epoch 57/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 38169204.0000 - mean_squared_error: 38169204.0000 - val_loss: 77700816.0000 - val_mean_squared_error: 77700816.0000
Epoch 58/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 36818836.0000 - mean_squared_error: 36818836.0000 - val_loss: 87806160.0000 - val_mean_squared_error: 87806160.0000
Epoch 59/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 37931872.0000 - mean_squared_error: 37931872.0000 - val_loss: 74217272.0000 - val_mean_squared_error: 74217272.0000
Epoch 60/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 37579816.0000 - mean_squared_error: 37579816.0000 - val_loss: 75316288.0000 - val_mean_squared_error: 75316288.0000
Epoch 61/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 33228666.0000 - mean_squared_error: 33228666.0000 - val_loss: 75831040.0000 - val_mean_squared_error: 75831040.0000
Epoch 62/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 35387648.0000 - mean_squared_error: 35387648.0000 - val_loss: 77118400.0000 - val_mean_squared_error: 77118400.0000
Epoch 63/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 32098334.0000 - mean_squared_error: 32098334.0000 - val_loss: 83110368.0000 - val_mean_squared_error: 83110368.0000
Epoch 64/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 39284708.0000 - mean_squared_error: 39284708.0000 - val_loss: 85751400.0000 - val_mean_squared_error: 85751400.0000
Epoch 65/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 44284044.0000 - mean_squared_error: 44284044.0000 - val_loss: 74936336.0000 - val_mean_squared_error: 74936336.0000
Epoch 66/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 38745308.0000 - mean_squared_error: 38745308.0000 - val_loss: 81279000.0000 - val_mean_squared_error: 81279000.0000
Epoch 67/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 27478438.0000 - mean_squared_error: 27478438.0000 - val_loss: 75401424.0000 - val_mean_squared_error: 75401424.0000
Epoch 68/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 38632124.0000 - mean_squared_error: 38632124.0000 - val_loss: 78507200.0000 - val_mean_squared_error: 78507200.0000
Epoch 69/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 35890640.0000 - mean_squared_error: 35890640.0000 - val_loss: 101580560.0000 - val_mean_squared_error: 101580560.0000
Epoch 70/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 32332438.0000 - mean_squared_error: 32332438.0000 - val_loss: 77930048.0000 - val_mean_squared_error: 77930048.0000
Epoch 71/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 39297312.0000 - mean_squared_error: 39297312.0000 - val_loss: 77934728.0000 - val_mean_squared_error: 77934728.0000
Epoch 72/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 32610698.0000 - mean_squared_error: 32610698.0000 - val_loss: 77599496.0000 - val_mean_squared_error: 77599496.0000
Epoch 73/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 31947664.0000 - mean_squared_error: 31947664.0000 - val_loss: 80411504.0000 - val_mean_squared_error: 80411504.0000
Epoch 74/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 34267256.0000 - mean_squared_error: 34267256.0000 - val_loss: 82096816.0000 - val_mean_squared_error: 82096816.0000
Epoch 75/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 35275280.0000 - mean_squared_error: 35275280.0000 - val_loss: 101106816.0000 - val_mean_squared_error: 101106808.0000
Epoch 76/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 32644134.0000 - mean_squared_error: 32644134.0000 - val_loss: 100204840.0000 - val_mean_squared_error: 100204840.0000
Epoch 77/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 29789682.0000 - mean_squared_error: 29789682.0000 - val_loss: 77586096.0000 - val_mean_squared_error: 77586096.0000
Epoch 78/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 28682304.0000 - mean_squared_error: 28682304.0000 - val_loss: 83449992.0000 - val_mean_squared_error: 83449992.0000
Epoch 79/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 33227386.0000 - mean_squared_error: 33227386.0000 - val_loss: 102076760.0000 - val_mean_squared_error: 102076760.0000
Epoch 80/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 32122214.0000 - mean_squared_error: 32122214.0000 - val_loss: 87675264.0000 - val_mean_squared_error: 87675264.0000
Epoch 81/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 40000416.0000 - mean_squared_error: 40000416.0000 - val_loss: 84759464.0000 - val_mean_squared_error: 84759464.0000
Epoch 82/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 28705328.0000 - mean_squared_error: 28705328.0000 - val_loss: 116120200.0000 - val_mean_squared_error: 116120200.0000
Epoch 83/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 26053756.0000 - mean_squared_error: 26053756.0000 - val_loss: 79440568.0000 - val_mean_squared_error: 79440568.0000
Epoch 84/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 29147432.0000 - mean_squared_error: 29147432.0000 - val_loss: 78131320.0000 - val_mean_squared_error: 78131320.0000
Epoch 85/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 41604512.0000 - mean_squared_error: 41604512.0000 - val_loss: 79188080.0000 - val_mean_squared_error: 79188080.0000
Epoch 86/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 28563300.0000 - mean_squared_error: 28563300.0000 - val_loss: 78902640.0000 - val_mean_squared_error: 78902640.0000
Epoch 87/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 29005700.0000 - mean_squared_error: 29005700.0000 - val_loss: 80962120.0000 - val_mean_squared_error: 80962120.0000
Epoch 88/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 28750588.0000 - mean_squared_error: 28750588.0000 - val_loss: 81937760.0000 - val_mean_squared_error: 81937760.0000
Epoch 89/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 33081254.0000 - mean_squared_error: 33081254.0000 - val_loss: 85452856.0000 - val_mean_squared_error: 85452856.0000
Epoch 90/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 31342552.0000 - mean_squared_error: 31342552.0000 - val_loss: 84070040.0000 - val_mean_squared_error: 84070040.0000
Epoch 91/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 34315120.0000 - mean_squared_error: 34315120.0000 - val_loss: 86855840.0000 - val_mean_squared_error: 86855840.0000
Epoch 92/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 30845780.0000 - mean_squared_error: 30845780.0000 - val_loss: 95905136.0000 - val_mean_squared_error: 95905136.0000
Epoch 93/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 31691700.0000 - mean_squared_error: 31691700.0000 - val_loss: 86269320.0000 - val_mean_squared_error: 86269320.0000
Epoch 94/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 30325544.0000 - mean_squared_error: 30325544.0000 - val_loss: 90048400.0000 - val_mean_squared_error: 90048400.0000
Epoch 95/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 34305068.0000 - mean_squared_error: 34305068.0000 - val_loss: 90910632.0000 - val_mean_squared_error: 90910632.0000
Epoch 96/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 31047190.0000 - mean_squared_error: 31047190.0000 - val_loss: 95354096.0000 - val_mean_squared_error: 95354096.0000
Epoch 97/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 29759750.0000 - mean_squared_error: 29759750.0000 - val_loss: 81281944.0000 - val_mean_squared_error: 81281944.0000
Epoch 98/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 40721804.0000 - mean_squared_error: 40721804.0000 - val_loss: 79659864.0000 - val_mean_squared_error: 79659864.0000
Epoch 99/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 28969450.0000 - mean_squared_error: 28969450.0000 - val_loss: 78733656.0000 - val_mean_squared_error: 78733656.0000
Epoch 100/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 29722584.0000 - mean_squared_error: 29722584.0000 - val_loss: 78392032.0000 - val_mean_squared_error: 78392032.0000
Epoch 101/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 40558192.0000 - mean_squared_error: 40558192.0000 - val_loss: 77861200.0000 - val_mean_squared_error: 77861200.0000
Epoch 102/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 35416408.0000 - mean_squared_error: 35416408.0000 - val_loss: 78543928.0000 - val_mean_squared_error: 78543928.0000
Epoch 103/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 26695838.0000 - mean_squared_error: 26695838.0000 - val_loss: 84454736.0000 - val_mean_squared_error: 84454736.0000
Epoch 104/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 32756784.0000 - mean_squared_error: 32756784.0000 - val_loss: 80269096.0000 - val_mean_squared_error: 80269096.0000
Epoch 105/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 28060838.0000 - mean_squared_error: 28060838.0000 - val_loss: 80814768.0000 - val_mean_squared_error: 80814768.0000
Epoch 106/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 25980400.0000 - mean_squared_error: 25980400.0000 - val_loss: 80504992.0000 - val_mean_squared_error: 80504992.0000
Epoch 107/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 31959324.0000 - mean_squared_error: 31959324.0000 - val_loss: 79407888.0000 - val_mean_squared_error: 79407888.0000
Epoch 108/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 35273708.0000 - mean_squared_error: 35273708.0000 - val_loss: 77881192.0000 - val_mean_squared_error: 77881192.0000
Epoch 109/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26444010.0000 - mean_squared_error: 26444010.0000 - val_loss: 79436768.0000 - val_mean_squared_error: 79436768.0000
Epoch 110/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 27235572.0000 - mean_squared_error: 27235572.0000 - val_loss: 80056448.0000 - val_mean_squared_error: 80056448.0000
Epoch 111/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 28651506.0000 - mean_squared_error: 28651506.0000 - val_loss: 76726448.0000 - val_mean_squared_error: 76726448.0000
Epoch 112/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26641254.0000 - mean_squared_error: 26641254.0000 - val_loss: 79402096.0000 - val_mean_squared_error: 79402096.0000
Epoch 113/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 29496170.0000 - mean_squared_error: 29496170.0000 - val_loss: 78783328.0000 - val_mean_squared_error: 78783328.0000
Epoch 114/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 31323442.0000 - mean_squared_error: 31323442.0000 - val_loss: 77353776.0000 - val_mean_squared_error: 77353776.0000
Epoch 115/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 32398604.0000 - mean_squared_error: 32398604.0000 - val_loss: 82057520.0000 - val_mean_squared_error: 82057520.0000
Epoch 116/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 27121104.0000 - mean_squared_error: 27121104.0000 - val_loss: 80269376.0000 - val_mean_squared_error: 80269376.0000
Epoch 117/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 28944198.0000 - mean_squared_error: 28944198.0000 - val_loss: 77415824.0000 - val_mean_squared_error: 77415824.0000
Epoch 118/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 24683304.0000 - mean_squared_error: 24683304.0000 - val_loss: 77211680.0000 - val_mean_squared_error: 77211680.0000
Epoch 119/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 29242176.0000 - mean_squared_error: 29242176.0000 - val_loss: 83377168.0000 - val_mean_squared_error: 83377168.0000
Epoch 120/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 23908586.0000 - mean_squared_error: 23908586.0000 - val_loss: 79812888.0000 - val_mean_squared_error: 79812888.0000
Epoch 121/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 31323176.0000 - mean_squared_error: 31323176.0000 - val_loss: 77386384.0000 - val_mean_squared_error: 77386384.0000
Epoch 122/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 31711636.0000 - mean_squared_error: 31711636.0000 - val_loss: 81761752.0000 - val_mean_squared_error: 81761752.0000
Epoch 123/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 24392178.0000 - mean_squared_error: 24392178.0000 - val_loss: 78839552.0000 - val_mean_squared_error: 78839552.0000
Epoch 124/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 25578456.0000 - mean_squared_error: 25578456.0000 - val_loss: 91996176.0000 - val_mean_squared_error: 91996176.0000
Epoch 125/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 32051754.0000 - mean_squared_error: 32051754.0000 - val_loss: 79049504.0000 - val_mean_squared_error: 79049504.0000
Epoch 126/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 28293786.0000 - mean_squared_error: 28293786.0000 - val_loss: 86090672.0000 - val_mean_squared_error: 86090672.0000
Epoch 127/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26398510.0000 - mean_squared_error: 26398510.0000 - val_loss: 80987360.0000 - val_mean_squared_error: 80987360.0000
Epoch 128/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 26371376.0000 - mean_squared_error: 26371376.0000 - val_loss: 80454048.0000 - val_mean_squared_error: 80454048.0000
Epoch 129/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 32256586.0000 - mean_squared_error: 32256586.0000 - val_loss: 80790320.0000 - val_mean_squared_error: 80790320.0000
Epoch 130/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 28196606.0000 - mean_squared_error: 28196606.0000 - val_loss: 76077336.0000 - val_mean_squared_error: 76077336.0000
Epoch 131/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 33438602.0000 - mean_squared_error: 33438602.0000 - val_loss: 87295704.0000 - val_mean_squared_error: 87295704.0000
Epoch 132/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 26243910.0000 - mean_squared_error: 26243910.0000 - val_loss: 79400696.0000 - val_mean_squared_error: 79400696.0000
Epoch 133/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 25568564.0000 - mean_squared_error: 25568564.0000 - val_loss: 82976400.0000 - val_mean_squared_error: 82976400.0000
Epoch 134/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 29179886.0000 - mean_squared_error: 29179886.0000 - val_loss: 84038624.0000 - val_mean_squared_error: 84038624.0000
Epoch 135/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 28692776.0000 - mean_squared_error: 28692776.0000 - val_loss: 85719840.0000 - val_mean_squared_error: 85719840.0000
Epoch 136/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 25243162.0000 - mean_squared_error: 25243162.0000 - val_loss: 87022872.0000 - val_mean_squared_error: 87022872.0000
Epoch 137/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 31427134.0000 - mean_squared_error: 31427134.0000 - val_loss: 82851208.0000 - val_mean_squared_error: 82851208.0000
Epoch 138/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 27183674.0000 - mean_squared_error: 27183674.0000 - val_loss: 82316256.0000 - val_mean_squared_error: 82316256.0000
Epoch 139/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 26838520.0000 - mean_squared_error: 26838520.0000 - val_loss: 80446624.0000 - val_mean_squared_error: 80446624.0000
Epoch 140/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26554074.0000 - mean_squared_error: 26554074.0000 - val_loss: 80614368.0000 - val_mean_squared_error: 80614376.0000
Epoch 141/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 22738920.0000 - mean_squared_error: 22738920.0000 - val_loss: 79297496.0000 - val_mean_squared_error: 79297496.0000
Epoch 142/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 29976430.0000 - mean_squared_error: 29976430.0000 - val_loss: 79466216.0000 - val_mean_squared_error: 79466216.0000
Epoch 143/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 25288860.0000 - mean_squared_error: 25288860.0000 - val_loss: 81613280.0000 - val_mean_squared_error: 81613280.0000
Epoch 144/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 23147528.0000 - mean_squared_error: 23147528.0000 - val_loss: 82203928.0000 - val_mean_squared_error: 82203928.0000
Epoch 145/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 28458054.0000 - mean_squared_error: 28458054.0000 - val_loss: 77255000.0000 - val_mean_squared_error: 77255000.0000
Epoch 146/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 24669776.0000 - mean_squared_error: 24669776.0000 - val_loss: 87841248.0000 - val_mean_squared_error: 87841248.0000
Epoch 147/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 27747026.0000 - mean_squared_error: 27747026.0000 - val_loss: 77592048.0000 - val_mean_squared_error: 77592048.0000
Epoch 148/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 32082286.0000 - mean_squared_error: 32082286.0000 - val_loss: 77497472.0000 - val_mean_squared_error: 77497472.0000
Epoch 149/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 30873142.0000 - mean_squared_error: 30873142.0000 - val_loss: 81522752.0000 - val_mean_squared_error: 81522752.0000
Epoch 150/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 27793972.0000 - mean_squared_error: 27793972.0000 - val_loss: 80845160.0000 - val_mean_squared_error: 80845160.0000
Epoch 151/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 24791810.0000 - mean_squared_error: 24791810.0000 - val_loss: 78711896.0000 - val_mean_squared_error: 78711896.0000
Epoch 152/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 21111938.0000 - mean_squared_error: 21111938.0000 - val_loss: 84839120.0000 - val_mean_squared_error: 84839120.0000
Epoch 153/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26767636.0000 - mean_squared_error: 26767636.0000 - val_loss: 80516680.0000 - val_mean_squared_error: 80516680.0000
Epoch 154/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 31437064.0000 - mean_squared_error: 31437064.0000 - val_loss: 81400576.0000 - val_mean_squared_error: 81400576.0000
Epoch 155/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 23324464.0000 - mean_squared_error: 23324464.0000 - val_loss: 78651824.0000 - val_mean_squared_error: 78651824.0000
Epoch 156/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 30054876.0000 - mean_squared_error: 30054876.0000 - val_loss: 82310800.0000 - val_mean_squared_error: 82310800.0000
Epoch 157/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 22087406.0000 - mean_squared_error: 22087406.0000 - val_loss: 84890568.0000 - val_mean_squared_error: 84890568.0000
Epoch 158/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 25235442.0000 - mean_squared_error: 25235442.0000 - val_loss: 86729576.0000 - val_mean_squared_error: 86729576.0000
Epoch 159/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 23945976.0000 - mean_squared_error: 23945976.0000 - val_loss: 85603136.0000 - val_mean_squared_error: 85603136.0000
Epoch 160/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 32760546.0000 - mean_squared_error: 32760546.0000 - val_loss: 82771360.0000 - val_mean_squared_error: 82771360.0000
Epoch 161/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 24720994.0000 - mean_squared_error: 24720994.0000 - val_loss: 87123336.0000 - val_mean_squared_error: 87123336.0000
Epoch 162/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 31845718.0000 - mean_squared_error: 31845718.0000 - val_loss: 84663752.0000 - val_mean_squared_error: 84663752.0000
Epoch 163/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 23777554.0000 - mean_squared_error: 23777554.0000 - val_loss: 82235848.0000 - val_mean_squared_error: 82235848.0000
Epoch 164/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26001728.0000 - mean_squared_error: 26001728.0000 - val_loss: 80807824.0000 - val_mean_squared_error: 80807824.0000
Epoch 165/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 29041698.0000 - mean_squared_error: 29041698.0000 - val_loss: 79101992.0000 - val_mean_squared_error: 79101992.0000
Epoch 166/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 19084656.0000 - mean_squared_error: 19084656.0000 - val_loss: 82201032.0000 - val_mean_squared_error: 82201032.0000
Epoch 167/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26052196.0000 - mean_squared_error: 26052196.0000 - val_loss: 78994488.0000 - val_mean_squared_error: 78994488.0000
Epoch 168/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 23889588.0000 - mean_squared_error: 23889588.0000 - val_loss: 85592032.0000 - val_mean_squared_error: 85592032.0000
Epoch 169/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 21954626.0000 - mean_squared_error: 21954626.0000 - val_loss: 86419152.0000 - val_mean_squared_error: 86419152.0000
Epoch 170/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 29587348.0000 - mean_squared_error: 29587348.0000 - val_loss: 84972352.0000 - val_mean_squared_error: 84972352.0000
Epoch 171/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 27571592.0000 - mean_squared_error: 27571592.0000 - val_loss: 82348784.0000 - val_mean_squared_error: 82348784.0000
Epoch 172/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 22471348.0000 - mean_squared_error: 22471348.0000 - val_loss: 92585848.0000 - val_mean_squared_error: 92585848.0000
Epoch 173/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26266582.0000 - mean_squared_error: 26266582.0000 - val_loss: 83414808.0000 - val_mean_squared_error: 83414808.0000
Epoch 174/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 31198580.0000 - mean_squared_error: 31198580.0000 - val_loss: 84298744.0000 - val_mean_squared_error: 84298744.0000
Epoch 175/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26970606.0000 - mean_squared_error: 26970606.0000 - val_loss: 87600248.0000 - val_mean_squared_error: 87600248.0000
Epoch 176/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 24915488.0000 - mean_squared_error: 24915488.0000 - val_loss: 94628360.0000 - val_mean_squared_error: 94628360.0000
Epoch 177/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 20898918.0000 - mean_squared_error: 20898918.0000 - val_loss: 81442000.0000 - val_mean_squared_error: 81442000.0000
Epoch 178/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 21910344.0000 - mean_squared_error: 21910344.0000 - val_loss: 82935576.0000 - val_mean_squared_error: 82935576.0000
Epoch 179/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26899140.0000 - mean_squared_error: 26899140.0000 - val_loss: 81202720.0000 - val_mean_squared_error: 81202720.0000
Epoch 180/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 32054868.0000 - mean_squared_error: 32054868.0000 - val_loss: 82430072.0000 - val_mean_squared_error: 82430072.0000
Epoch 181/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 24038540.0000 - mean_squared_error: 24038540.0000 - val_loss: 79691664.0000 - val_mean_squared_error: 79691664.0000
Epoch 182/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 24449658.0000 - mean_squared_error: 24449658.0000 - val_loss: 82030632.0000 - val_mean_squared_error: 82030632.0000
Epoch 183/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 20950294.0000 - mean_squared_error: 20950294.0000 - val_loss: 83315328.0000 - val_mean_squared_error: 83315328.0000
Epoch 184/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 22855752.0000 - mean_squared_error: 22855752.0000 - val_loss: 87792872.0000 - val_mean_squared_error: 87792872.0000
Epoch 185/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 20908908.0000 - mean_squared_error: 20908908.0000 - val_loss: 83518192.0000 - val_mean_squared_error: 83518192.0000
Epoch 186/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 21832706.0000 - mean_squared_error: 21832706.0000 - val_loss: 87243472.0000 - val_mean_squared_error: 87243472.0000
Epoch 187/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26305236.0000 - mean_squared_error: 26305236.0000 - val_loss: 83303480.0000 - val_mean_squared_error: 83303480.0000
Epoch 188/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 25051056.0000 - mean_squared_error: 25051056.0000 - val_loss: 81974752.0000 - val_mean_squared_error: 81974752.0000
Epoch 189/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 24276824.0000 - mean_squared_error: 24276824.0000 - val_loss: 86396560.0000 - val_mean_squared_error: 86396560.0000
Epoch 190/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 26883102.0000 - mean_squared_error: 26883102.0000 - val_loss: 92878880.0000 - val_mean_squared_error: 92878880.0000
Epoch 191/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 25810710.0000 - mean_squared_error: 25810710.0000 - val_loss: 87862360.0000 - val_mean_squared_error: 87862360.0000
Epoch 192/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 25044232.0000 - mean_squared_error: 25044232.0000 - val_loss: 86666920.0000 - val_mean_squared_error: 86666920.0000
Epoch 193/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 19775356.0000 - mean_squared_error: 19775356.0000 - val_loss: 83514704.0000 - val_mean_squared_error: 83514704.0000
Epoch 194/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 25031968.0000 - mean_squared_error: 25031968.0000 - val_loss: 82988632.0000 - val_mean_squared_error: 82988632.0000
Epoch 195/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 24132226.0000 - mean_squared_error: 24132226.0000 - val_loss: 91642112.0000 - val_mean_squared_error: 91642112.0000
Epoch 196/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 27966424.0000 - mean_squared_error: 27966424.0000 - val_loss: 83523840.0000 - val_mean_squared_error: 83523840.0000
Epoch 197/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 24224508.0000 - mean_squared_error: 24224508.0000 - val_loss: 89031352.0000 - val_mean_squared_error: 89031352.0000
Epoch 198/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 27604414.0000 - mean_squared_error: 27604414.0000 - val_loss: 91340320.0000 - val_mean_squared_error: 91340320.0000
Epoch 199/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 32703616.0000 - mean_squared_error: 32703616.0000 - val_loss: 87543200.0000 - val_mean_squared_error: 87543200.0000
Epoch 200/200
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 30208852.0000 - mean_squared_error: 30208852.0000 - val_loss: 88831944.0000 - val_mean_squared_error: 88831944.0000
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 0s 3ms/step
Train MAE: 2859.367389072331
Train MSE: 34128019.18143482
Train RMSE: 5841.919135133147
Train R²: 0.8103744018463591
Test MAE: 4355.35290974529
Test MSE: 76225853.05001558
Test RMSE: 8730.74183847029
Test R²: 0.5795634611653163
In [94]:
# Visualize training history
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training History')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
No description has been provided for this image
In [96]:
# Visualize predicted vs actual values
plt.figure(figsize=(10, 6))
plt.scatter(y_test, y_pred_test, alpha=0.3)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], '--r', linewidth=2)
plt.xlim(-1000, 100000)  # Adjust the values as needed
plt.ylim(-1000, 100000)
plt.title('Neural Network: Predicted vs Actual Freight Cost')
plt.xlabel('Actual Freight Cost')
plt.ylabel('Predicted Freight Cost')
plt.show()
No description has been provided for this image

Demand Forecaast¶

In [317]:
# Parse dates
df['Delivered to Client Date'] = pd.to_datetime(df['Delivered to Client Date'], errors='coerce')

# Aggregate data by month, summing only numeric columns
df['Month'] = df['Delivered to Client Date'].dt.to_period('M')
df_monthly = df.groupby('Month')['Line_Item_Quantity'].sum().reset_index()

# Convert 'Month' back to datetime for forecasting
df_monthly['Month'] = df_monthly['Month'].dt.to_timestamp()

# Rename columns for demand forecasting
df_demand = df_monthly.rename(columns={'Month': 'ds', 'Line_Item_Quantity': 'y'})

# Display the prepared data
df_demand
Out[317]:
ds y
0 2006-05-01 75
1 2006-06-01 166
2 2006-07-01 50506
3 2006-08-01 94019
4 2006-09-01 85948
... ... ...
108 2015-05-01 3347388
109 2015-06-01 2866818
110 2015-07-01 1302480
111 2015-08-01 2263591
112 2015-09-01 171323

113 rows × 2 columns

Prophet Forecast (Monthly Demand Trend Forecast)¶

In [ ]:
'''Prophet is a statistical forecasting model developed by Facebook, often used for time series forecasting. 
While it incorporates some elements commonly found in machine learning models,
it is fundamentally a sophisticated statistical model rather than a traditional AI model.'''
In [318]:
from prophet import Prophet

# Initialize the Prophet model
model = Prophet()

# Fit the model on the prepared data
model.fit(df_demand)

# Create a dataframe to hold predictions (e.g., for the next 12 months)
future = model.make_future_dataframe(periods=12, freq='M')

# Predict future demand
forecast = model.predict(future)

# Plot the forecast
fig = model.plot(forecast)
plt.title('Demand Forecast')
plt.xlabel('Date')
plt.ylabel('Demand')
plt.show()

# Plot forecast components
fig2 = model.plot_components(forecast)
plt.show()

# Display forecast data
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(36)
15:31:42 - cmdstanpy - INFO - Chain [1] start processing
15:31:43 - cmdstanpy - INFO - Chain [1] done processing
No description has been provided for this image
No description has been provided for this image
Out[318]:
ds yhat yhat_lower yhat_upper
89 2013-10-01 2.693639e+06 1.611433e+06 3.747777e+06
90 2013-11-01 2.592505e+06 1.469659e+06 3.632955e+06
91 2013-12-01 2.393133e+06 1.287300e+06 3.414292e+06
92 2014-01-01 2.071352e+06 9.979076e+05 3.088390e+06
93 2014-02-01 2.819082e+06 1.703839e+06 3.920130e+06
94 2014-03-01 2.921321e+06 1.897951e+06 4.034164e+06
95 2014-04-01 2.560360e+06 1.524385e+06 3.678608e+06
96 2014-05-01 2.713315e+06 1.684490e+06 3.800261e+06
97 2014-06-01 2.881543e+06 1.825151e+06 4.036635e+06
98 2014-07-01 2.594401e+06 1.562289e+06 3.647724e+06
99 2014-08-01 2.465705e+06 1.394603e+06 3.426849e+06
100 2014-09-01 2.842719e+06 1.787701e+06 4.015639e+06
101 2014-10-01 2.968516e+06 1.945193e+06 4.009993e+06
102 2014-11-01 2.884127e+06 1.822330e+06 3.909051e+06
103 2014-12-01 2.677526e+06 1.611766e+06 3.760641e+06
104 2015-01-01 2.358685e+06 1.389603e+06 3.459260e+06
105 2015-02-01 3.093848e+06 2.041893e+06 4.193376e+06
106 2015-03-01 3.221112e+06 2.109235e+06 4.327279e+06
107 2015-04-01 2.827134e+06 1.824528e+06 3.888745e+06
108 2015-05-01 3.018487e+06 1.989991e+06 4.043886e+06
109 2015-06-01 3.178599e+06 2.030141e+06 4.208313e+06
110 2015-07-01 2.855240e+06 1.647221e+06 3.844097e+06
111 2015-08-01 2.746233e+06 1.589752e+06 3.822924e+06
112 2015-09-01 3.152370e+06 2.097671e+06 4.207598e+06
113 2015-09-30 3.193744e+06 2.049906e+06 4.286957e+06
114 2015-10-31 3.196877e+06 2.139823e+06 4.306769e+06
115 2015-11-30 2.953591e+06 1.921551e+06 4.051949e+06
116 2015-12-31 2.652003e+06 1.529724e+06 3.642313e+06
117 2016-01-31 3.322598e+06 2.170960e+06 4.425645e+06
118 2016-02-29 3.520731e+06 2.367172e+06 4.598809e+06
119 2016-03-31 3.094133e+06 2.098146e+06 4.197932e+06
120 2016-04-30 3.323874e+06 2.254230e+06 4.435847e+06
121 2016-05-31 3.474998e+06 2.399117e+06 4.555280e+06
122 2016-06-30 3.115984e+06 2.025165e+06 4.131757e+06
123 2016-07-31 3.027487e+06 2.008762e+06 4.135693e+06
124 2016-08-31 3.462026e+06 2.407556e+06 4.540937e+06
In [319]:
# Evaluate the model's performance using historical data
# Split the data into training and testing sets
train = df_demand.iloc[:-24]
test = df_demand.iloc[:]

# Fit the model on the training set
model_train = Prophet()
model_train.fit(train)

# Create a dataframe to hold predictions for the test period
future_test = model_train.make_future_dataframe(periods=24, freq='M')
forecast_test = model_train.predict(future_test)

# Plot the forecast against actual values
fig = model_train.plot(forecast_test)
plt.plot(test['ds'], test['y'], 'r-', label='Actual')
plt.legend()
plt.title('Demand Forecast vs Actual')
plt.xlabel('Date')
plt.ylabel('Demand')
plt.show()
15:31:52 - cmdstanpy - INFO - Chain [1] start processing
15:31:52 - cmdstanpy - INFO - Chain [1] done processing
No description has been provided for this image
In [320]:
# Calculate forecast error metrics
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = test['y'].values
y_pred = forecast_test['yhat'].iloc[:].values
mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
R2 = r2_score(y_true, y_pred)

print(f'Mean Absolute Error: {mae}')
print(f'Mean Squared Error: {mse}')
print(f'Root Mean Squared Error: {rmse}')
print(f'Test R²: {test_r2}')
Mean Absolute Error: 635812.8265504248
Mean Squared Error: 1075813506318.2765
Root Mean Squared Error: 1037214.3010575377
Test R²: -0.14663060484905222

LSTM for Demand Forecast (Monthly Demand Trend Forecast, time_step = 3)¶

In [350]:
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Ensure no missing values
#df_demand = df_demand.dropna()

# Set date as index
#df_demand.set_index('ds', inplace=True)

# Normalize the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(df_demand)

# Prepare the dataset for LSTM
#The create_dataset function prepares the data for the LSTM model by converting the time series data into sequences suitable for training
def create_dataset(dataset, time_step=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - time_step - 1):
        a = dataset[i:(i + time_step), 0]
        dataX.append(a)
        dataY.append(dataset[i + time_step, 0])
    return np.array(dataX), np.array(dataY)

time_step = 3
X, y = create_dataset(scaled_data, time_step)

# Split into training and testing datasets
train_size = int(len(X) * 0.8)
test_size = len(X) - train_size
X_train, X_test = X[0:train_size], X[train_size:len(X)]
y_train, y_test = y[0:train_size], y[train_size:len(y)]

# Reshape input to be [samples, time steps, features] which is required for LSTM
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

# Create the LSTM model
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(time_step, 1)))
model.add(LSTM(50, return_sequences=False))
#model.add(Dense(50))
model.add(Dense(25))
model.add(Dense(1))

model.summary()

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model with validation data and capture the history
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=1, epochs=40)

# Make predictions
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)

# Inverse transform to get actual values
train_predict = scaler.inverse_transform(train_predict)
test_predict = scaler.inverse_transform(test_predict)
y_train = scaler.inverse_transform([y_train])
y_test = scaler.inverse_transform([y_test])

# Calculate MAE
train_mae = np.mean(((abs(train_predict - y_train[0]))))
test_mae = np.mean(((abs(test_predict - y_test[0]))))


# Calculate RMSE
train_rmse = np.sqrt(np.mean(((train_predict - y_train[0]) ** 2)))
test_rmse = np.sqrt(np.mean(((test_predict - y_test[0]) ** 2)))

# Calculate R² Score
train_r2 = r2_score(y_train[0], train_predict)
test_r2 = r2_score(y_test[0], test_predict)

print(f'Train MSE: {train_mae}')
print(f'Test MSE: {test_mae}')
print(f'Train RMSE: {train_rmse}')
print(f'Test RMSE: {test_rmse}')
print(f'Train R²: {train_r2}')
print(f'Test R²: {test_r2}')
Model: "sequential_32"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ lstm_34 (LSTM)                       │ (None, 3, 50)               │          10,400 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ lstm_35 (LSTM)                       │ (None, 50)                  │          20,200 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_100 (Dense)                    │ (None, 25)                  │           1,275 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_101 (Dense)                    │ (None, 1)                   │              26 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 31,901 (124.61 KB)
 Trainable params: 31,901 (124.61 KB)
 Non-trainable params: 0 (0.00 B)
Epoch 1/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 6s 13ms/step - loss: 0.0434 - val_loss: 0.0440
Epoch 2/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0164 - val_loss: 0.0573
Epoch 3/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0362 - val_loss: 0.0426
Epoch 4/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0112 - val_loss: 0.0376
Epoch 5/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0276 - val_loss: 0.0383
Epoch 6/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0143 - val_loss: 0.0358
Epoch 7/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0200 - val_loss: 0.0360
Epoch 8/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0133 - val_loss: 0.0640
Epoch 9/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0256 - val_loss: 0.0369
Epoch 10/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0124 - val_loss: 0.0368
Epoch 11/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0175 - val_loss: 0.0355
Epoch 12/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0279 - val_loss: 0.0353
Epoch 13/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0178 - val_loss: 0.0401
Epoch 14/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0111 - val_loss: 0.0335
Epoch 15/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0168 - val_loss: 0.0374
Epoch 16/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0102 - val_loss: 0.0373
Epoch 17/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0106 - val_loss: 0.0342
Epoch 18/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0158 - val_loss: 0.0382
Epoch 19/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0140 - val_loss: 0.0366
Epoch 20/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0115 - val_loss: 0.0387
Epoch 21/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0129 - val_loss: 0.0376
Epoch 22/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0113 - val_loss: 0.0358
Epoch 23/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0194 - val_loss: 0.0350
Epoch 24/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 0.0165 - val_loss: 0.0321
Epoch 25/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0183 - val_loss: 0.0351
Epoch 26/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0147 - val_loss: 0.0349
Epoch 27/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0146 - val_loss: 0.0344
Epoch 28/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0189 - val_loss: 0.0335
Epoch 29/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0111 - val_loss: 0.0328
Epoch 30/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0111 - val_loss: 0.0312
Epoch 31/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0102 - val_loss: 0.0312
Epoch 32/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0104 - val_loss: 0.0312
Epoch 33/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0220 - val_loss: 0.0341
Epoch 34/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0176 - val_loss: 0.0306
Epoch 35/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0220 - val_loss: 0.0367
Epoch 36/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0118 - val_loss: 0.0314
Epoch 37/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0109 - val_loss: 0.0312
Epoch 38/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0274 - val_loss: 0.0353
Epoch 39/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0310 - val_loss: 0.0364
Epoch 40/40
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0090 - val_loss: 0.0301
3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 222ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 51ms/step
Train MSE: 1109765.3797665644
Test MSE: 817630.3233471074
Train RMSE: 1374990.8842950538
Test RMSE: 1011043.0789603657
Train R²: 0.5016868158557218
Test R²: -0.12464702141193729
In [351]:
# Plot training and validation loss over epochs
plt.figure(figsize=(10, 6))
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Train vs Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
No description has been provided for this image
In [352]:
# Plot the results
plt.figure(figsize=(14, 8))
plt.plot(df_demand.index, scaler.inverse_transform(scaled_data), label='Actual Data')

train_plot = np.empty_like(scaled_data)
train_plot[:, :] = np.nan
train_plot[time_step:len(train_predict) + time_step, :] = train_predict
plt.plot(df_demand.index, train_plot, label='Train Predict')

# Shift test predictions for plotting
test_plot = np.empty_like(scaled_data)
test_plot[:, :] = np.nan
#test_plot[len(train_predict) + (time_step * 2) + 1:len(scaled_data) - 1, :] = test_predict
test_plot[len(train_predict) + (time_step * 1) :len(scaled_data) - 1, :] = test_predict
plt.plot(df_demand.index[len(train_predict) + (time_step * 1) :len(scaled_data) - 1], test_predict, label='Test Predict')

plt.xlabel('Date')
plt.ylabel('Demand')
plt.legend()
plt.show()
No description has been provided for this image

LSTM for Demand Forecast (time_step = 6)¶

In [354]:
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Ensure no missing values
#df_demand = df_demand.dropna()

# Set date as index
#df_demand.set_index('ds', inplace=True)

# Normalize the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(df_demand)

# Prepare the dataset for LSTM
#The create_dataset function prepares the data for the LSTM model by converting the time series data into sequences suitable for training
def create_dataset(dataset, time_step=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - time_step - 1):
        a = dataset[i:(i + time_step), 0]
        dataX.append(a)
        dataY.append(dataset[i + time_step, 0])
    return np.array(dataX), np.array(dataY)

time_step = 6
X, y = create_dataset(scaled_data, time_step)

# Split into training and testing datasets
train_size = int(len(X) * 0.8)
test_size = len(X) - train_size
X_train, X_test = X[0:train_size], X[train_size:len(X)]
y_train, y_test = y[0:train_size], y[train_size:len(y)]

# Reshape input to be [samples, time steps, features] which is required for LSTM
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

# Create the LSTM model
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(time_step, 1)))
model.add(LSTM(50, return_sequences=False))
#model.add(Dense(50))
model.add(Dense(25))
model.add(Dense(1))

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(X_train, y_train, batch_size=1, epochs=40)

# Make predictions
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)

# Inverse transform to get actual values
train_predict = scaler.inverse_transform(train_predict)
test_predict = scaler.inverse_transform(test_predict)
y_train = scaler.inverse_transform([y_train])
y_test = scaler.inverse_transform([y_test])

# Calculate MAE
train_mae = np.mean(((abs(train_predict - y_train[0]))))
test_mae = np.mean(((abs(test_predict - y_test[0]))))


# Calculate RMSE
train_rmse = np.sqrt(np.mean(((train_predict - y_train[0]) ** 2)))
test_rmse = np.sqrt(np.mean(((test_predict - y_test[0]) ** 2)))

# Calculate R² Score
train_r2 = r2_score(y_train[0], train_predict)
test_r2 = r2_score(y_test[0], test_predict)

print(f'Train MSE: {train_mae}')
print(f'Test MSE: {test_mae}')
print(f'Train RMSE: {train_rmse}')
print(f'Test RMSE: {test_rmse}')
print(f'Train R²: {train_r2}')
print(f'Test R²: {test_r2}')
Epoch 1/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 5s 6ms/step - loss: 0.0303
Epoch 2/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0192
Epoch 3/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0123
Epoch 4/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0192
Epoch 5/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0173
Epoch 6/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0190
Epoch 7/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0181
Epoch 8/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0121
Epoch 9/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0150
Epoch 10/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0160
Epoch 11/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0344
Epoch 12/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0244
Epoch 13/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0245
Epoch 14/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0120
Epoch 15/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0170
Epoch 16/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0143
Epoch 17/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0152
Epoch 18/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0193
Epoch 19/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0143
Epoch 20/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0129
Epoch 21/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0122
Epoch 22/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0191
Epoch 23/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0144
Epoch 24/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0235
Epoch 25/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0157
Epoch 26/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0101
Epoch 27/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 0.0172
Epoch 28/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0120
Epoch 29/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0156
Epoch 30/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0199
Epoch 31/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0139
Epoch 32/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0141
Epoch 33/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0120
Epoch 34/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0110
Epoch 35/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0160
Epoch 36/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0283
Epoch 37/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0158
Epoch 38/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0138
Epoch 39/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 0.0155
Epoch 40/40
84/84 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0107
3/3 ━━━━━━━━━━━━━━━━━━━━ 1s 258ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 58ms/step
Train MSE: 1119082.5146683673
Test MSE: 819752.0516528926
Train RMSE: 1386478.5890054419
Test RMSE: 1020070.6607100137
Train R²: 0.46992783915618863
Test R²: -0.21330238619647268
In [355]:
# Plot the results
plt.figure(figsize=(14, 8))
plt.plot(df_demand.index, scaler.inverse_transform(scaled_data), label='Actual Data')

train_plot = np.empty_like(scaled_data)
train_plot[:, :] = np.nan
train_plot[time_step:len(train_predict) + time_step, :] = train_predict
plt.plot(df_demand.index, train_plot, label='Train Predict')

# Shift test predictions for plotting
test_plot = np.empty_like(scaled_data)
test_plot[:, :] = np.nan
#test_plot[len(train_predict) + (time_step * 2) + 1:len(scaled_data) - 1, :] = test_predict
test_plot[len(train_predict) + (time_step * 1) :len(scaled_data) - 1, :] = test_predict
plt.plot(df_demand.index[len(train_predict) + (time_step * 1) :len(scaled_data) - 1], test_predict, label='Test Predict')

plt.xlabel('Date')
plt.ylabel('Demand')
plt.legend()
plt.show()
No description has been provided for this image

Demand Forecast (df_no_outlier)¶

In [88]:
# Parse dates
df_no_outlier['Delivered to Client Date'] = pd.to_datetime(df_no_outlier['Delivered to Client Date'], errors='coerce')

# Aggregate data by month, summing only numeric columns
df_no_outlier['Month'] = df_no_outlier['Delivered to Client Date'].dt.to_period('M')
df_monthly_no_outlier = df_no_outlier.groupby('Month')['Line_Item_Quantity'].sum().reset_index()

# Convert 'Month' back to datetime for forecasting
df_monthly_no_outlier['Month'] = df_monthly_no_outlier['Month'].dt.to_timestamp()

# Rename columns for demand forecasting
df_demand_no_outlier = df_monthly_no_outlier.rename(columns={'Month': 'ds', 'Line_Item_Quantity': 'y'})

# Display the prepared data
df_demand_no_outlier
Out[88]:
ds y
0 2006-05-01 75
1 2006-06-01 166
2 2006-07-01 506
3 2006-08-01 94019
4 2006-09-01 85948
... ... ...
108 2015-05-01 94316
109 2015-06-01 454873
110 2015-07-01 188511
111 2015-08-01 42811
112 2015-09-01 147750

113 rows × 2 columns

In [89]:
from prophet import Prophet
import matplotlib.pyplot as plt

# Initialize the Prophet model
model = Prophet()

# Fit the model on the prepared data
model.fit(df_demand_no_outlier)

# Create a dataframe to hold predictions (e.g., for the next 12 months)
future = model.make_future_dataframe(periods=12, freq='M')

# Predict future demand
forecast = model.predict(future)

# Plot the forecast
fig = model.plot(forecast)
plt.title('Demand Forecast')
plt.xlabel('Date')
plt.ylabel('Demand')
plt.show()

# Plot forecast components
fig2 = model.plot_components(forecast)
plt.show()

# Display forecast data
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(36)
12:11:42 - cmdstanpy - INFO - Chain [1] start processing
12:11:42 - cmdstanpy - INFO - Chain [1] done processing
No description has been provided for this image
No description has been provided for this image
Out[89]:
ds yhat yhat_lower yhat_upper
89 2013-10-01 678068.376372 58419.920767 1.256596e+06
90 2013-11-01 673812.979101 50676.272776 1.266460e+06
91 2013-12-01 663750.402077 87310.833701 1.206199e+06
92 2014-01-01 291348.811597 -308698.627638 8.606739e+05
93 2014-02-01 572531.894310 -57276.654272 1.103609e+06
94 2014-03-01 538677.109378 -20091.307577 1.138092e+06
95 2014-04-01 435557.423832 -171886.534177 1.041780e+06
96 2014-05-01 211020.654463 -411778.245493 7.557337e+05
97 2014-06-01 411738.963894 -118088.631081 9.690266e+05
98 2014-07-01 552456.567499 -43187.354776 1.116845e+06
99 2014-08-01 449127.479812 -143872.218068 1.035166e+06
100 2014-09-01 802886.052220 205892.427798 1.386288e+06
101 2014-10-01 524999.744867 -29977.072870 1.107920e+06
102 2014-11-01 586755.519076 -34496.884511 1.186836e+06
103 2014-12-01 531516.329883 -85730.035622 1.088780e+06
104 2015-01-01 171175.441991 -399668.608472 7.821156e+05
105 2015-02-01 405847.067904 -166387.466817 1.001830e+06
106 2015-03-01 411008.734163 -170487.588915 9.588232e+05
107 2015-04-01 340850.090293 -250957.797485 9.610114e+05
108 2015-05-01 95592.194173 -511134.127350 6.522677e+05
109 2015-06-01 290320.395800 -334281.990486 8.752845e+05
110 2015-07-01 399550.583293 -188118.706480 9.785622e+05
111 2015-08-01 287069.171272 -281115.944883 8.865712e+05
112 2015-09-01 715947.414302 103222.907331 1.310926e+06
113 2015-09-30 272029.434692 -327457.878083 8.576103e+05
114 2015-10-31 660899.456447 85639.641966 1.256853e+06
115 2015-11-30 368203.781970 -204235.019778 9.335747e+05
116 2015-12-31 70108.545472 -514537.291775 6.995315e+05
117 2016-01-31 79201.622791 -521872.321835 6.904207e+05
118 2016-02-29 284379.872979 -347696.104886 8.481304e+05
119 2016-03-31 246492.268411 -375095.432598 8.289989e+05
120 2016-04-30 -20277.623666 -562195.477798 5.784226e+05
121 2016-05-31 168078.428731 -441196.732446 7.077943e+05
122 2016-06-30 246140.306735 -311141.412065 8.408498e+05
123 2016-07-31 125965.282816 -517358.079268 6.718102e+05
124 2016-08-31 629346.167454 38213.616588 1.207533e+06
In [90]:
# Evaluate the model's performance using historical data
# Split the data into training and testing sets
train = df_demand_no_outlier.iloc[:-24]
test = df_demand_no_outlier.iloc[:]

# Fit the model on the training set
model_train = Prophet()
model_train.fit(train)

# Create a dataframe to hold predictions for the test period
future_test = model_train.make_future_dataframe(periods=24, freq='M')
forecast_test = model_train.predict(future_test)

# Plot the forecast against actual values
fig = model_train.plot(forecast_test)
plt.plot(test['ds'], test['y'], 'r-', label='Actual')
plt.legend()
plt.title('Demand Forecast vs Actual')
plt.xlabel('Date')
plt.ylabel('Demand')
plt.show()
12:13:02 - cmdstanpy - INFO - Chain [1] start processing
12:13:02 - cmdstanpy - INFO - Chain [1] done processing
No description has been provided for this image
In [91]:
# Calculate forecast error metrics
from sklearn.metrics import mean_absolute_error, mean_squared_error
import numpy as np

y_true = test['y'].values
y_pred = forecast_test['yhat'].iloc[:].values
mae = mean_absolute_error(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)

print(f'Mean Absolute Error: {mae}')
print(f'Mean Squared Error: {mse}')
print(f'Root Mean Squared Error: {rmse}')
Mean Absolute Error: 510624.080906199
Mean Squared Error: 516590110220.44446
Root Mean Squared Error: 718742.0331526775
In [92]:
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Ensure no missing values
#df_demand = df_demand.dropna()

# Set date as index
df_demand_no_outlier.set_index('ds', inplace=True)

# Normalize the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(df_demand_no_outlier)

# Prepare the dataset for LSTM
#The create_dataset function prepares the data for the LSTM model by converting the time series data into sequences suitable for training
def create_dataset(dataset, time_step=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - time_step - 1):
        a = dataset[i:(i + time_step), 0]
        dataX.append(a)
        dataY.append(dataset[i + time_step, 0])
    return np.array(dataX), np.array(dataY)

time_step = 3
X, y = create_dataset(scaled_data, time_step)

# Split into training and testing datasets
train_size = int(len(X) * 0.8)
test_size = len(X) - train_size
X_train, X_test = X[0:train_size], X[train_size:len(X)]
y_train, y_test = y[0:train_size], y[train_size:len(y)]

# Reshape input to be [samples, time steps, features] which is required for LSTM
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

# Create the LSTM model
model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(time_step, 1)))
model.add(LSTM(50, return_sequences=False))
#model.add(Dense(50))
model.add(Dense(25))
model.add(Dense(1))

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Train the model
model.fit(X_train, y_train, batch_size=1, epochs=20)

# Make predictions
train_predict = model.predict(X_train)
test_predict = model.predict(X_test)

# Inverse transform to get actual values
train_predict = scaler.inverse_transform(train_predict)
test_predict = scaler.inverse_transform(test_predict)
y_train = scaler.inverse_transform([y_train])
y_test = scaler.inverse_transform([y_test])

# Calculate RMSE
train_rmse = np.sqrt(np.mean(((train_predict - y_train[0]) ** 2)))
test_rmse = np.sqrt(np.mean(((test_predict - y_test[0]) ** 2)))

print(f'Train RMSE: {train_rmse}')
print(f'Test RMSE: {test_rmse}')
Epoch 1/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 8s 6ms/step - loss: 0.0285
Epoch 2/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0287
Epoch 3/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0186
Epoch 4/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 8ms/step - loss: 0.0357
Epoch 5/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0670
Epoch 6/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0312
Epoch 7/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0497
Epoch 8/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0197
Epoch 9/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0256
Epoch 10/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0568
Epoch 11/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0252
Epoch 12/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0208
Epoch 13/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0387
Epoch 14/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0265
Epoch 15/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0241
Epoch 16/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0213
Epoch 17/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0180
Epoch 18/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0214
Epoch 19/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0266
Epoch 20/20
87/87 ━━━━━━━━━━━━━━━━━━━━ 1s 7ms/step - loss: 0.0238
WARNING:tensorflow:6 out of the last 75 calls to <function TensorFlowTrainer.make_predict_function.<locals>.one_step_on_data_distributed at 0x000001842B6D9120> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
3/3 ━━━━━━━━━━━━━━━━━━━━ 2s 435ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 70ms/step
Train RMSE: 598288.2294173973
Test RMSE: 373374.04493172455
In [93]:
# Plot the results
plt.figure(figsize=(14, 8))
plt.plot(df_demand_no_outlier.index, scaler.inverse_transform(scaled_data), label='Actual Data')

train_plot = np.empty_like(scaled_data)
train_plot[:, :] = np.nan
train_plot[time_step:len(train_predict) + time_step, :] = train_predict
plt.plot(df_demand_no_outlier.index, train_plot, label='Train Predict')

# Shift test predictions for plotting
test_plot = np.empty_like(scaled_data)
test_plot[:, :] = np.nan
#test_plot[len(train_predict) + (time_step * 2) + 1:len(scaled_data) - 1, :] = test_predict
test_plot[len(train_predict) + (time_step * 1) :len(scaled_data) - 1, :] = test_predict
plt.plot(df_demand_no_outlier.index[len(train_predict) + (time_step * 1) :len(scaled_data) - 1], test_predict, label='Test Predict')

plt.xlabel('Date')
plt.ylabel('Demand')
plt.legend()
plt.show()
No description has been provided for this image
In [ ]:
 
In [251]:
csv_file_path = 'output.csv'
df.to_csv(csv_file_path, index=False)
In [ ]:
 

ConvLSTM Demand Forecast¶

In [114]:
from tensorflow.keras.layers import ConvLSTM2D, Dense, Flatten
from sklearn.preprocessing import LabelEncoder
In [386]:
data = df.copy()
In [387]:
# Encode categorical variables
categorical_columns = ['Project Code', 'Month','Country', 'Shipment Mode', 'Product Group', 'Manufacturing Site', 'Vendor', 'Item Description']
label_encoders = {col: LabelEncoder().fit(data[col]) for col in categorical_columns}
for col, encoder in label_encoders.items():
    data[col] = encoder.transform(data[col])
In [388]:
# Select relevant features for the ConvLSTM model
features = ['Project Code','Month','Country', 'Shipment Mode', 'Product Group', 'Line_Item_Quantity', 'Line_Item_Value', 
            'Manufacturing Site','Item Description','Weight_Kilograms','Freight_Cost_USD']

# Filter the dataset for the features
data_filtered = data[features].values
In [389]:
data[features]
Out[389]:
Project Code Month Country Shipment Mode Product Group Line_Item_Quantity Line_Item_Value Manufacturing Site Item Description Weight_Kilograms Freight_Cost_USD
0 3 1 9 0 3 19 551.00 76 98 13.0 780.340
1 65 6 40 0 2 1000 6200.00 11 143 358.0 4521.500
2 3 3 9 0 3 500 40000.00 5 70 171.0 1653.780
3 65 4 40 0 2 31920 127360.80 79 104 1855.0 16007.060
4 65 3 40 0 2 38000 121600.00 11 169 7590.0 45450.080
... ... ... ... ... ... ... ... ... ... ... ...
10319 51 110 42 3 2 166571 599655.60 66 112 11427.0 9968.972
10320 54 111 9 3 2 21072 137389.44 39 119 2833.2 14322.596
10321 71 111 41 3 2 514526 5140114.74 22 59 69849.2 61614.864
10322 115 111 42 3 2 17465 113871.80 65 119 1392.0 8641.690
10323 51 111 42 3 2 36639 72911.61 23 120 9588.6 16770.866

10324 rows × 11 columns

In [390]:
# Function to create sequences for NumPy arrays
def create_sequences(data, seq_length, forecast_horizon, target_index):
    X, y = [], []
    for i in range(len(data) - seq_length - forecast_horizon + 1):
        X.append(data[i:(i + seq_length), :])
        y.append(data[i + seq_length:i + seq_length + forecast_horizon, target_index])
    return np.array(X), np.array(y)

# Create sequences
sequence_length = 3
forecast_horizon = 1
target_index = features.index('Line_Item_Quantity')  # Assuming 'Line_Item_Quantity' is the fourth column in the data_filtered array
X, y = create_sequences(data_filtered, sequence_length, forecast_horizon, target_index)
In [391]:
# Split the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Reshape for ConvLSTM: [samples, timesteps, rows, columns, features]
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1, X_train.shape[2], 1))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1], 1, X_test.shape[2], 1))
In [392]:
# Define the ConvLSTM model
model = Sequential([
    ConvLSTM2D(filters=64, kernel_size=(1, 2), activation='relu', 
               input_shape=(sequence_length, 1, X_train.shape[3], 1)),
    Flatten(),
    Dense(50, activation='relu'),
    Dense(forecast_horizon)
])

print (model.summary())
Model: "sequential_38"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ conv_lstm2d_34 (ConvLSTM2D)          │ (None, 1, 10, 64)           │          33,536 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ flatten_16 (Flatten)                 │ (None, 640)                 │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_112 (Dense)                    │ (None, 50)                  │          32,050 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_113 (Dense)                    │ (None, 1)                   │              51 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 65,637 (256.39 KB)
 Trainable params: 65,637 (256.39 KB)
 Non-trainable params: 0 (0.00 B)
None
In [393]:
# Compile the model
model.compile(optimizer='adam', loss='mse')

# Train the model
history = model.fit(X_train, y_train, epochs=20, validation_split=0.2)
Epoch 1/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 7s 13ms/step - loss: 1838083968.0000 - val_loss: 1787631232.0000
Epoch 2/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 1722333696.0000 - val_loss: 1738135040.0000
Epoch 3/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1568133248.0000 - val_loss: 1731485696.0000
Epoch 4/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1730716544.0000 - val_loss: 1693364992.0000
Epoch 5/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1808896384.0000 - val_loss: 1653177856.0000
Epoch 6/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1717891200.0000 - val_loss: 1681337088.0000
Epoch 7/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1633101440.0000 - val_loss: 1697134720.0000
Epoch 8/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1572819840.0000 - val_loss: 1766612864.0000
Epoch 9/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1540838016.0000 - val_loss: 1608719872.0000
Epoch 10/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1432165888.0000 - val_loss: 1677716352.0000
Epoch 11/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 1482718592.0000 - val_loss: 1626177792.0000
Epoch 12/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1650900736.0000 - val_loss: 1625809408.0000
Epoch 13/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 1399844096.0000 - val_loss: 1631870848.0000
Epoch 14/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 10ms/step - loss: 1768038784.0000 - val_loss: 1635705600.0000
Epoch 15/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 12ms/step - loss: 1395410432.0000 - val_loss: 1657215232.0000
Epoch 16/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1529111808.0000 - val_loss: 1643238528.0000
Epoch 17/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1398584832.0000 - val_loss: 1614614400.0000
Epoch 18/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1637028096.0000 - val_loss: 1593476096.0000
Epoch 19/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1522195712.0000 - val_loss: 1603121664.0000
Epoch 20/20
207/207 ━━━━━━━━━━━━━━━━━━━━ 2s 11ms/step - loss: 1555126016.0000 - val_loss: 1580395520.0000
In [394]:
# Plotting training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Per Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss (MSE)')
plt.legend()
plt.show()
No description has been provided for this image
In [395]:
# Evaluate the model
loss = model.evaluate(X_test, y_test)
print(f'Test Loss: {loss}')

# Predict using the model
predictions = model.predict(X_test)
65/65 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 1503508992.0000
Test Loss: 1596585344.0
65/65 ━━━━━━━━━━━━━━━━━━━━ 1s 10ms/step
In [396]:
# Calculate performance metrics
mse = mean_squared_error(y_test, predictions)
mae = mean_absolute_error(y_test, predictions)
print(f"Mean Squared Error: {mse}")
print(f"Mean Absolute Error: {mae}")
# Calculate R-squared
r2 = r2_score(y_test, predictions)
print(f"R-squared: {r2}")
Mean Squared Error: 1596585081.0406408
Mean Absolute Error: 21484.403973639906
R-squared: 0.025051755843762957
In [397]:
# Plotting the first 100 predictions against the true values for better visibility
plt.figure(figsize=(20, 5))
plt.plot(y_test[:200], label='Actual')
plt.plot(predictions[:200], label='Predicted', alpha=0.7)
plt.title('Actual vs. Predicted Values')
plt.xlabel('Sample Index')
plt.ylabel('Demand')
plt.legend()
plt.show()
No description has been provided for this image

Bayesian Optimization for ConvLSTM¶

In [129]:
import tensorflow as tf
from skopt import gp_minimize
from skopt.space import Real, Integer
from skopt.utils import use_named_args

# Define the search space for your hyperparameters
search_space = [
    Integer(32, 128, name='units'),
    Real(0.001, 0.1, "log-uniform", name='learning_rate'),
    Integer(1, 4, name='num_conv_layers'),
    Integer(1, 3, name='num_lstm_layers'),
    Integer(1, 3, name='kernel_size')
]

@use_named_args(search_space)
def objective(**params):
    units = params['units']
    learning_rate = params['learning_rate']
    num_conv_layers = params['num_conv_layers']
    num_lstm_layers = params['num_lstm_layers']
    kernel_size = params['kernel_size']
    
    # Create model here with these parameters
    model = Sequential()
    
    for _ in range(num_conv_layers):
        model.add(ConvLSTM2D(filters=units, kernel_size=(kernel_size, kernel_size), padding='same', return_sequences=True))
    
    model.add(Flatten())
    
    for _ in range(num_lstm_layers):
        model.add(Dense(units))
    
    model.add(Dense(1))
    
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate), loss='mse')
    
    # Fit model on training data
    model.fit(X_train, y_train, epochs=5, verbose=0)
    
    # Predict on validation data
    predictions = model.predict(X_test)
    mse = mean_squared_error(y_test, predictions)
    
    return mse
In [130]:
result = gp_minimize(objective, search_space, n_calls=10, random_state=0)

print("Best parameters found:", result.x)
print("Lowest MSE found:", result.fun)
65/65 ━━━━━━━━━━━━━━━━━━━━ 7s 70ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 2s 20ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 4s 41ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 7s 76ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 5s 52ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 5s 52ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 3s 32ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 3s 32ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 2s 23ms/step
65/65 ━━━━━━━━━━━━━━━━━━━━ 2s 22ms/step
Best parameters found: [69, 0.003936128001463711, 1, 2, 2]
Lowest MSE found: 1550796765.8156254
In [131]:
result
Out[131]:
          fun: 1550796765.8156254
            x: [69, 0.003936128001463711, 1, 2, 2]
    func_vals: [ 1.638e+09  1.551e+09  1.614e+09  1.639e+09  1.649e+09
                 1.586e+09  1.599e+09  1.571e+09  2.557e+09  1.614e+09]
      x_iters: [[89, 0.048812550121497114, 4, 3, 2], [69, 0.003936128001463711, 1, 2, 2], [110, 0.009119149691664954, 2, 3, 2], [94, 0.005451086575834672, 4, 1, 3], [77, 0.039978040531536196, 3, 2, 2], [88, 0.011878085823766309, 3, 1, 2], [50, 0.029773943113019383, 2, 1, 2], [46, 0.00278383042173454, 2, 3, 2], [91, 0.06378185800348744, 1, 3, 2], [48, 0.005203605139233064, 3, 2, 2]]
       models: [GaussianProcessRegressor(kernel=1**2 * Matern(length_scale=[1, 1, 1, 1, 1], nu=2.5) + WhiteKernel(noise_level=1),
                                        n_restarts_optimizer=2, noise='gaussian',
                                        normalize_y=True, random_state=209652396)]
        space: Space([Integer(low=32, high=128, prior='uniform', transform='normalize'),
                      Real(low=0.001, high=0.1, prior='log-uniform', transform='normalize'),
                      Integer(low=1, high=4, prior='uniform', transform='normalize'),
                      Integer(low=1, high=3, prior='uniform', transform='normalize'),
                      Integer(low=1, high=3, prior='uniform', transform='normalize')])
 random_state: RandomState(MT19937)
        specs:     args:                    func: <function objective at 0x0000026120256700>
                                      dimensions: Space([Integer(low=32, high=128, prior='uniform', transform='normalize'),
                                                         Real(low=0.001, high=0.1, prior='log-uniform', transform='normalize'),
                                                         Integer(low=1, high=4, prior='uniform', transform='normalize'),
                                                         Integer(low=1, high=3, prior='uniform', transform='normalize'),
                                                         Integer(low=1, high=3, prior='uniform', transform='normalize')])
                                  base_estimator: GaussianProcessRegressor(kernel=1**2 * Matern(length_scale=[1, 1, 1, 1, 1], nu=2.5),
                                                                           n_restarts_optimizer=2, noise='gaussian',
                                                                           normalize_y=True, random_state=209652396)
                                         n_calls: 10
                                 n_random_starts: None
                                n_initial_points: 10
                         initial_point_generator: random
                                        acq_func: gp_hedge
                                   acq_optimizer: auto
                                              x0: None
                                              y0: None
                                    random_state: RandomState(MT19937)
                                         verbose: False
                                        callback: None
                                        n_points: 10000
                            n_restarts_optimizer: 5
                                              xi: 0.01
                                           kappa: 1.96
                                          n_jobs: 1
                                model_queue_size: None
                                space_constraint: None
               function: base_minimize

ConvLSTM with Best Parameter¶

In [132]:
# Best parameters found
units = 69
learning_rate = 0.003936128001463711
num_conv_layers = 1
num_lstm_layers = 2
kernel_size = 2

# Model Architecture
model = Sequential()

# Adding ConvLSTM layers based on the best parameters
for _ in range(num_conv_layers):
    model.add(ConvLSTM2D(filters=units, kernel_size=(kernel_size, kernel_size), padding='same', 
                         return_sequences=True, input_shape=(sequence_length, 1, X_train.shape[3], 1)))

# Flatten the output before feeding into Dense layers
model.add(Flatten())

# Adding Dense layers
for _ in range(num_lstm_layers):
    model.add(Dense(units, activation='relu'))

# Output layer
model.add(Dense(1))  # Assuming a single output (e.g., demand)

# Compile the model
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate), loss='mse')

# Summary of the model
model.summary()
Model: "sequential_16"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                         ┃ Output Shape                ┃         Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━┩
│ conv_lstm2d_30 (ConvLSTM2D)          │ (None, 3, 1, 9, 69)         │          77,556 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ flatten_12 (Flatten)                 │ (None, 1863)                │               0 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_49 (Dense)                     │ (None, 69)                  │         128,616 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_50 (Dense)                     │ (None, 69)                  │           4,830 │
├──────────────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_51 (Dense)                     │ (None, 1)                   │              70 │
└──────────────────────────────────────┴─────────────────────────────┴─────────────────┘
 Total params: 211,072 (824.50 KB)
 Trainable params: 211,072 (824.50 KB)
 Non-trainable params: 0 (0.00 B)
In [133]:
# Compile the model
model.compile(optimizer='adam', loss='mse')

# Train the model
history = model.fit(X_train, y_train, epochs=40, validation_split=0.2)
Epoch 1/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 6s 15ms/step - loss: 1755833216.0000 - val_loss: 1639805312.0000
Epoch 2/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1476088832.0000 - val_loss: 1628672128.0000
Epoch 3/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1561165568.0000 - val_loss: 1607112320.0000
Epoch 4/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1493041536.0000 - val_loss: 1584419968.0000
Epoch 5/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1753185792.0000 - val_loss: 1568176256.0000
Epoch 6/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1368463360.0000 - val_loss: 1559697024.0000
Epoch 7/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1450841984.0000 - val_loss: 1555893504.0000
Epoch 8/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1525058048.0000 - val_loss: 1559233920.0000
Epoch 9/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1654288640.0000 - val_loss: 1548690944.0000
Epoch 10/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1330932736.0000 - val_loss: 1543342976.0000
Epoch 11/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1551449600.0000 - val_loss: 1542533248.0000
Epoch 12/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1439761536.0000 - val_loss: 1541172224.0000
Epoch 13/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1453436288.0000 - val_loss: 1538061696.0000
Epoch 14/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 12ms/step - loss: 1521119360.0000 - val_loss: 1541178240.0000
Epoch 15/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1425362560.0000 - val_loss: 1550139648.0000
Epoch 16/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1308150400.0000 - val_loss: 1541357184.0000
Epoch 17/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1415698560.0000 - val_loss: 1536617728.0000
Epoch 18/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1571574528.0000 - val_loss: 1534726656.0000
Epoch 19/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1435119744.0000 - val_loss: 1531341568.0000
Epoch 20/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1363232000.0000 - val_loss: 1531554432.0000
Epoch 21/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1404277760.0000 - val_loss: 1532648192.0000
Epoch 22/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1690932864.0000 - val_loss: 1533354368.0000
Epoch 23/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1419534080.0000 - val_loss: 1530973184.0000
Epoch 24/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1372219008.0000 - val_loss: 1534238976.0000
Epoch 25/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1625455360.0000 - val_loss: 1542287616.0000
Epoch 26/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1408536960.0000 - val_loss: 1536188544.0000
Epoch 27/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1407330176.0000 - val_loss: 1535338880.0000
Epoch 28/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1603661824.0000 - val_loss: 1547499008.0000
Epoch 29/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1584394496.0000 - val_loss: 1534059264.0000
Epoch 30/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1346244864.0000 - val_loss: 1537340672.0000
Epoch 31/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1385773824.0000 - val_loss: 1529610112.0000
Epoch 32/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1404664320.0000 - val_loss: 1548761728.0000
Epoch 33/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1600288896.0000 - val_loss: 1534862464.0000
Epoch 34/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 14ms/step - loss: 1553540352.0000 - val_loss: 1537544576.0000
Epoch 35/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1468519680.0000 - val_loss: 1530009856.0000
Epoch 36/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1472282624.0000 - val_loss: 1541754880.0000
Epoch 37/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1362797568.0000 - val_loss: 1535063424.0000
Epoch 38/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1438396928.0000 - val_loss: 1534770816.0000
Epoch 39/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 13ms/step - loss: 1476463232.0000 - val_loss: 1530864128.0000
Epoch 40/40
207/207 ━━━━━━━━━━━━━━━━━━━━ 3s 12ms/step - loss: 1387155072.0000 - val_loss: 1539242752.0000
In [134]:
# Plotting training and validation loss
plt.figure(figsize=(10, 5))
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Training and Validation Loss Per Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss (MSE)')
plt.legend()
plt.show()
No description has been provided for this image
In [135]:
# Evaluate the model
loss = model.evaluate(X_test, y_test)
print(f'Test Loss: {loss}')

# Predict using the model
predictions = model.predict(X_test)
65/65 ━━━━━━━━━━━━━━━━━━━━ 0s 5ms/step - loss: 1463080704.0000
Test Loss: 1541437824.0
65/65 ━━━━━━━━━━━━━━━━━━━━ 1s 11ms/step
In [136]:
# Calculate performance metrics
mse = mean_squared_error(y_test, predictions)
mae = mean_absolute_error(y_test, predictions)
print(f"Mean Squared Error: {mse}")
print(f"Mean Absolute Error: {mae}")
# Calculate R-squared
r2 = r2_score(y_test, predictions)
print(f"R-squared: {r2}")
Mean Squared Error: 1541438023.2918985
Mean Absolute Error: 20515.274330357144
R-squared: 0.058727084368989324
In [137]:
# Plotting the first 100 predictions against the true values for better visibility
plt.figure(figsize=(20, 5))
plt.plot(y_test[:200], label='Actual')
plt.plot(predictions[:200], label='Predicted', alpha=0.7)
plt.title('Actual vs. Predicted Values')
plt.xlabel('Sample Index')
plt.ylabel('Demand')
plt.legend()
plt.show()
No description has been provided for this image

Self_Attention Demand Forecast¶

In [144]:
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, MultiHeadAttention, LayerNormalization, Dropout
In [231]:
data = []
In [232]:
data = df.copy()
In [234]:
# Encode categorical variables
categorical_columns = ['Project Code', 'Month','Country', 'Shipment Mode', 'Product Group', 'Manufacturing Site', 'Vendor', 'Item Description']
label_encoders = {col: LabelEncoder().fit(data[col]) for col in categorical_columns}
for col, encoder in label_encoders.items():
    data[col] = encoder.transform(data[col])
In [236]:
# Select relevant features for the ConvLSTM model
features = ['Project Code','Month','Country', 'Shipment Mode', 'Product Group', 'Line_Item_Quantity', 'Line_Item_Value', 
            'Manufacturing Site','Item Description','Weight_Kilograms','Freight_Cost_USD']

#features = ['Month', 'Line_Item_Quantity']

# Prepare the final dataset for modeling
features_data = data[features].copy()

# Split the data into features (X) and target (y)
X = features_data.drop('Line_Item_Quantity', axis=1)
y = features_data['Line_Item_Quantity']

# Ensure all features are numeric and free of NaNs
X = X.apply(pd.to_numeric, errors='coerce').fillna(0)

# Prepare the input data for the model
X_input = np.array(X)

# Reshape the input to match the expected input shape for the attention model
X_input_reshaped = X_input.reshape((X_input.shape[0], 1, X_input.shape[1]))
In [237]:
y.shape
Out[237]:
(10324,)
In [238]:
X_input_reshaped.shape
Out[238]:
(10324, 1, 10)
In [240]:
# Define a simple self-attention model
def build_self_attention_model(input_shape):
    inputs = Input(shape=input_shape)

    # Multi-head attention layer
    attention_output = MultiHeadAttention(num_heads=2, key_dim=2)(inputs, inputs)
    attention_output = LayerNormalization(epsilon=1e-6)(attention_output)

    # Feed-forward network
    ff_output = Dense(64, activation='relu')(attention_output)
    ff_output = Dropout(0.15)(ff_output)
    ff_output = Dense(32, activation='relu')(ff_output)

    # Final output layer
    outputs = Dense(1, activation='linear')(ff_output)

    model = Model(inputs, outputs)
    return model

# Build the model
model = build_self_attention_model(X_input_reshaped.shape[1:])
print (model.summary())


#changes before hyperoparameter tuning
#attention_output = LayerNormalization(epsilon=1e-6)(attention_output)
#ff_output = Dropout(0.1)(ff_output)
Model: "functional_47"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Layer (type)                  ┃ Output Shape              ┃         Param # ┃ Connected to               ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ input_layer_21 (InputLayer)   │ (None, 1, 10)             │               0 │ -                          │
├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤
│ multi_head_attention_5        │ (None, 1, 10)             │             182 │ input_layer_21[0][0],      │
│ (MultiHeadAttention)          │                           │                 │ input_layer_21[0][0]       │
├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤
│ layer_normalization_5         │ (None, 1, 10)             │              20 │ multi_head_attention_5[0]… │
│ (LayerNormalization)          │                           │                 │                            │
├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤
│ dense_67 (Dense)              │ (None, 1, 64)             │             704 │ layer_normalization_5[0][… │
├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤
│ dropout_15 (Dropout)          │ (None, 1, 64)             │               0 │ dense_67[0][0]             │
├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤
│ dense_68 (Dense)              │ (None, 1, 32)             │           2,080 │ dropout_15[0][0]           │
├───────────────────────────────┼───────────────────────────┼─────────────────┼────────────────────────────┤
│ dense_69 (Dense)              │ (None, 1, 1)              │              33 │ dense_68[0][0]             │
└───────────────────────────────┴───────────────────────────┴─────────────────┴────────────────────────────┘
 Total params: 3,019 (11.79 KB)
 Trainable params: 3,019 (11.79 KB)
 Non-trainable params: 0 (0.00 B)
None
In [250]:
# Compile the model
model.compile(optimizer='adam', loss='mse')

# Train the model
history = model.fit(X_input_reshaped, y, epochs=300, batch_size=32, validation_split=0.2)

# Display the training process
print(history.history)
Epoch 1/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 5s 8ms/step - loss: 479116736.0000 - val_loss: 892461056.0000
Epoch 2/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 377197568.0000 - val_loss: 853235328.0000
Epoch 3/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 443696256.0000 - val_loss: 1099082240.0000
Epoch 4/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 446441248.0000 - val_loss: 934190080.0000
Epoch 5/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 411986976.0000 - val_loss: 844846080.0000
Epoch 6/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 483927584.0000 - val_loss: 1352435328.0000
Epoch 7/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 487498112.0000 - val_loss: 852462720.0000
Epoch 8/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 542319808.0000 - val_loss: 980103232.0000
Epoch 9/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 515966016.0000 - val_loss: 866708544.0000
Epoch 10/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 478717440.0000 - val_loss: 924335168.0000
Epoch 11/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 435410624.0000 - val_loss: 834286336.0000
Epoch 12/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 393204608.0000 - val_loss: 852768576.0000
Epoch 13/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 424357472.0000 - val_loss: 950470912.0000
Epoch 14/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 460826752.0000 - val_loss: 867789632.0000
Epoch 15/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 438510592.0000 - val_loss: 827841024.0000
Epoch 16/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 367837472.0000 - val_loss: 984758016.0000
Epoch 17/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 327120800.0000 - val_loss: 799164032.0000
Epoch 18/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 374640672.0000 - val_loss: 847518464.0000
Epoch 19/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 357601312.0000 - val_loss: 807851328.0000
Epoch 20/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 369174112.0000 - val_loss: 1258063488.0000
Epoch 21/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 405029088.0000 - val_loss: 1334589824.0000
Epoch 22/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 372826656.0000 - val_loss: 1272330880.0000
Epoch 23/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 455222656.0000 - val_loss: 877872512.0000
Epoch 24/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 410957376.0000 - val_loss: 805933632.0000
Epoch 25/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 366198592.0000 - val_loss: 863433472.0000
Epoch 26/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 439004992.0000 - val_loss: 808496128.0000
Epoch 27/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 396631360.0000 - val_loss: 1083551616.0000
Epoch 28/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 433126848.0000 - val_loss: 856750720.0000
Epoch 29/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 470325440.0000 - val_loss: 805061888.0000
Epoch 30/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 346955744.0000 - val_loss: 1300659584.0000
Epoch 31/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 370431488.0000 - val_loss: 775836672.0000
Epoch 32/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 376367904.0000 - val_loss: 803550208.0000
Epoch 33/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 461241440.0000 - val_loss: 818053952.0000
Epoch 34/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 420468352.0000 - val_loss: 792871872.0000
Epoch 35/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 423101216.0000 - val_loss: 773162496.0000
Epoch 36/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 438166240.0000 - val_loss: 770067456.0000
Epoch 37/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 467949920.0000 - val_loss: 792690688.0000
Epoch 38/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 354388992.0000 - val_loss: 808338304.0000
Epoch 39/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 492686400.0000 - val_loss: 851775488.0000
Epoch 40/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 401689280.0000 - val_loss: 967329344.0000
Epoch 41/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 394290272.0000 - val_loss: 829463168.0000
Epoch 42/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 362512576.0000 - val_loss: 852967488.0000
Epoch 43/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 397468608.0000 - val_loss: 908776896.0000
Epoch 44/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 512757280.0000 - val_loss: 832713664.0000
Epoch 45/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 362770016.0000 - val_loss: 810450816.0000
Epoch 46/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 403364064.0000 - val_loss: 835592320.0000
Epoch 47/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 398217312.0000 - val_loss: 1123609472.0000
Epoch 48/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 385524160.0000 - val_loss: 797550656.0000
Epoch 49/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 467580480.0000 - val_loss: 827139648.0000
Epoch 50/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 391479104.0000 - val_loss: 819812096.0000
Epoch 51/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 362043712.0000 - val_loss: 850016640.0000
Epoch 52/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 454400352.0000 - val_loss: 800847296.0000
Epoch 53/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 415808800.0000 - val_loss: 1118923904.0000
Epoch 54/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - loss: 370636064.0000 - val_loss: 751954752.0000
Epoch 55/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 487863264.0000 - val_loss: 956748416.0000
Epoch 56/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 360948576.0000 - val_loss: 788656448.0000
Epoch 57/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 338396576.0000 - val_loss: 903412416.0000
Epoch 58/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 347817344.0000 - val_loss: 935675584.0000
Epoch 59/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 399750688.0000 - val_loss: 742235328.0000
Epoch 60/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 452355616.0000 - val_loss: 809700224.0000
Epoch 61/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 448244288.0000 - val_loss: 827263040.0000
Epoch 62/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 399471552.0000 - val_loss: 830557248.0000
Epoch 63/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 432163232.0000 - val_loss: 878112000.0000
Epoch 64/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 336411136.0000 - val_loss: 839070912.0000
Epoch 65/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 364598400.0000 - val_loss: 748227008.0000
Epoch 66/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 374850240.0000 - val_loss: 744857472.0000
Epoch 67/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 329965344.0000 - val_loss: 747212416.0000
Epoch 68/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 357428576.0000 - val_loss: 755497280.0000
Epoch 69/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 365674240.0000 - val_loss: 1322627200.0000
Epoch 70/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 429804512.0000 - val_loss: 999164480.0000
Epoch 71/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 424081632.0000 - val_loss: 1008469696.0000
Epoch 72/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 354886432.0000 - val_loss: 802410624.0000
Epoch 73/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 449684192.0000 - val_loss: 1106876544.0000
Epoch 74/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 353784768.0000 - val_loss: 804427456.0000
Epoch 75/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 398642816.0000 - val_loss: 807398336.0000
Epoch 76/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 375494816.0000 - val_loss: 729133888.0000
Epoch 77/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 375379200.0000 - val_loss: 874364992.0000
Epoch 78/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 383811968.0000 - val_loss: 719350016.0000
Epoch 79/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 333216160.0000 - val_loss: 723441216.0000
Epoch 80/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 413230400.0000 - val_loss: 1144090368.0000
Epoch 81/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 518128256.0000 - val_loss: 721575296.0000
Epoch 82/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 421380352.0000 - val_loss: 746955008.0000
Epoch 83/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 390726784.0000 - val_loss: 745422656.0000
Epoch 84/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 341905472.0000 - val_loss: 871384960.0000
Epoch 85/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 358258912.0000 - val_loss: 823464896.0000
Epoch 86/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 359816064.0000 - val_loss: 966147008.0000
Epoch 87/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 392348864.0000 - val_loss: 1184788224.0000
Epoch 88/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 460637344.0000 - val_loss: 721684480.0000
Epoch 89/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 374454272.0000 - val_loss: 1016027008.0000
Epoch 90/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 413011424.0000 - val_loss: 1045063936.0000
Epoch 91/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 381873920.0000 - val_loss: 720471552.0000
Epoch 92/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 348099488.0000 - val_loss: 869306048.0000
Epoch 93/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 333139232.0000 - val_loss: 733128448.0000
Epoch 94/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 400332512.0000 - val_loss: 932493056.0000
Epoch 95/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 319299648.0000 - val_loss: 1443143680.0000
Epoch 96/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 471843840.0000 - val_loss: 749771712.0000
Epoch 97/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 357254976.0000 - val_loss: 753285696.0000
Epoch 98/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 339555776.0000 - val_loss: 827456640.0000
Epoch 99/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 317066528.0000 - val_loss: 730938944.0000
Epoch 100/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 330861440.0000 - val_loss: 1361253632.0000
Epoch 101/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 391421472.0000 - val_loss: 712014080.0000
Epoch 102/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 408656000.0000 - val_loss: 740717440.0000
Epoch 103/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 304845408.0000 - val_loss: 769180672.0000
Epoch 104/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 370018624.0000 - val_loss: 842748096.0000
Epoch 105/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 341104352.0000 - val_loss: 1195997056.0000
Epoch 106/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 372858432.0000 - val_loss: 1097339520.0000
Epoch 107/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 358171808.0000 - val_loss: 719440768.0000
Epoch 108/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 357025600.0000 - val_loss: 753393408.0000
Epoch 109/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 458019392.0000 - val_loss: 1207032448.0000
Epoch 110/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 449081888.0000 - val_loss: 720984896.0000
Epoch 111/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 343670240.0000 - val_loss: 713313344.0000
Epoch 112/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 429914368.0000 - val_loss: 720303744.0000
Epoch 113/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 341100192.0000 - val_loss: 702649984.0000
Epoch 114/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 378007328.0000 - val_loss: 724119872.0000
Epoch 115/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 347109248.0000 - val_loss: 721729920.0000
Epoch 116/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 392667776.0000 - val_loss: 755097600.0000
Epoch 117/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 357696000.0000 - val_loss: 712147008.0000
Epoch 118/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 331721792.0000 - val_loss: 782895936.0000
Epoch 119/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 489367264.0000 - val_loss: 905332480.0000
Epoch 120/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 347369760.0000 - val_loss: 693100736.0000
Epoch 121/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 358923200.0000 - val_loss: 704931264.0000
Epoch 122/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 424691872.0000 - val_loss: 837551616.0000
Epoch 123/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 371981120.0000 - val_loss: 689923712.0000
Epoch 124/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 395579456.0000 - val_loss: 724274112.0000
Epoch 125/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - loss: 350465600.0000 - val_loss: 737701440.0000
Epoch 126/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 416143040.0000 - val_loss: 709989824.0000
Epoch 127/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 319315296.0000 - val_loss: 727535616.0000
Epoch 128/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 353236480.0000 - val_loss: 692230528.0000
Epoch 129/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 353372896.0000 - val_loss: 695385344.0000
Epoch 130/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 282553632.0000 - val_loss: 747537472.0000
Epoch 131/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 380335904.0000 - val_loss: 754203648.0000
Epoch 132/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 341326816.0000 - val_loss: 690374464.0000
Epoch 133/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 341962336.0000 - val_loss: 796221184.0000
Epoch 134/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 400074464.0000 - val_loss: 688665856.0000
Epoch 135/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 471457312.0000 - val_loss: 714098048.0000
Epoch 136/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 361469376.0000 - val_loss: 685124096.0000
Epoch 137/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 436745312.0000 - val_loss: 682797888.0000
Epoch 138/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 319533120.0000 - val_loss: 820488704.0000
Epoch 139/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 331848864.0000 - val_loss: 723850560.0000
Epoch 140/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 421662112.0000 - val_loss: 778493248.0000
Epoch 141/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 348551584.0000 - val_loss: 878802112.0000
Epoch 142/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 355727552.0000 - val_loss: 692737024.0000
Epoch 143/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 339126656.0000 - val_loss: 755617984.0000
Epoch 144/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 512003200.0000 - val_loss: 715519488.0000
Epoch 145/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 353525248.0000 - val_loss: 739041344.0000
Epoch 146/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 344915040.0000 - val_loss: 695529152.0000
Epoch 147/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 356107584.0000 - val_loss: 814540736.0000
Epoch 148/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 418492416.0000 - val_loss: 777667904.0000
Epoch 149/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 346827904.0000 - val_loss: 859722432.0000
Epoch 150/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 338344512.0000 - val_loss: 873497472.0000
Epoch 151/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 353061024.0000 - val_loss: 702976640.0000
Epoch 152/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 406618528.0000 - val_loss: 778539008.0000
Epoch 153/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 366234624.0000 - val_loss: 907823744.0000
Epoch 154/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 409693728.0000 - val_loss: 707088768.0000
Epoch 155/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 325072544.0000 - val_loss: 726463680.0000
Epoch 156/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 549281344.0000 - val_loss: 706870592.0000
Epoch 157/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 392041536.0000 - val_loss: 732358528.0000
Epoch 158/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 354691072.0000 - val_loss: 759188288.0000
Epoch 159/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 390390976.0000 - val_loss: 708726976.0000
Epoch 160/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 360535104.0000 - val_loss: 690117056.0000
Epoch 161/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 339817152.0000 - val_loss: 699649856.0000
Epoch 162/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 362077248.0000 - val_loss: 744864320.0000
Epoch 163/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 338576704.0000 - val_loss: 694027008.0000
Epoch 164/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 3s 8ms/step - loss: 391244576.0000 - val_loss: 740521408.0000
Epoch 165/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 367918880.0000 - val_loss: 772712064.0000
Epoch 166/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 367891392.0000 - val_loss: 710717696.0000
Epoch 167/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 343325408.0000 - val_loss: 685982784.0000
Epoch 168/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 327884640.0000 - val_loss: 813662976.0000
Epoch 169/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 418874976.0000 - val_loss: 683239680.0000
Epoch 170/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 363789472.0000 - val_loss: 681042304.0000
Epoch 171/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 365784800.0000 - val_loss: 818057728.0000
Epoch 172/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 356374400.0000 - val_loss: 1482279040.0000
Epoch 173/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 422744224.0000 - val_loss: 749204032.0000
Epoch 174/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 347969088.0000 - val_loss: 976408320.0000
Epoch 175/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 357806048.0000 - val_loss: 758435968.0000
Epoch 176/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 324077088.0000 - val_loss: 691870208.0000
Epoch 177/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 419894528.0000 - val_loss: 712863744.0000
Epoch 178/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 484325728.0000 - val_loss: 679017792.0000
Epoch 179/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 408100832.0000 - val_loss: 719955136.0000
Epoch 180/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 352979968.0000 - val_loss: 778199104.0000
Epoch 181/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 352795264.0000 - val_loss: 722517824.0000
Epoch 182/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 327143648.0000 - val_loss: 854392320.0000
Epoch 183/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 412435136.0000 - val_loss: 701124416.0000
Epoch 184/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 383205472.0000 - val_loss: 737042880.0000
Epoch 185/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 337742080.0000 - val_loss: 685622528.0000
Epoch 186/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 376010720.0000 - val_loss: 681878208.0000
Epoch 187/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 383588544.0000 - val_loss: 683681728.0000
Epoch 188/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 332983168.0000 - val_loss: 692717376.0000
Epoch 189/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 318064160.0000 - val_loss: 736208512.0000
Epoch 190/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 379405440.0000 - val_loss: 709675136.0000
Epoch 191/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 445573504.0000 - val_loss: 691134208.0000
Epoch 192/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 401159072.0000 - val_loss: 772660480.0000
Epoch 193/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 349063296.0000 - val_loss: 987348096.0000
Epoch 194/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 392224992.0000 - val_loss: 947292480.0000
Epoch 195/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 394785600.0000 - val_loss: 710164224.0000
Epoch 196/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 350687264.0000 - val_loss: 1076940928.0000
Epoch 197/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 338589248.0000 - val_loss: 774606080.0000
Epoch 198/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - loss: 412507232.0000 - val_loss: 978161472.0000
Epoch 199/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 309782976.0000 - val_loss: 675734784.0000
Epoch 200/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 337865376.0000 - val_loss: 729686976.0000
Epoch 201/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 360395616.0000 - val_loss: 689884800.0000
Epoch 202/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 367188736.0000 - val_loss: 760137600.0000
Epoch 203/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 356923936.0000 - val_loss: 684043328.0000
Epoch 204/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 402666240.0000 - val_loss: 695964864.0000
Epoch 205/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 337634272.0000 - val_loss: 682254208.0000
Epoch 206/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 397304640.0000 - val_loss: 709653824.0000
Epoch 207/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 418621344.0000 - val_loss: 1019583808.0000
Epoch 208/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 326441504.0000 - val_loss: 786788288.0000
Epoch 209/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 719436160.0000 - val_loss: 737649024.0000
Epoch 210/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 406236320.0000 - val_loss: 846126272.0000
Epoch 211/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 341224704.0000 - val_loss: 845724736.0000
Epoch 212/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 442152288.0000 - val_loss: 701604480.0000
Epoch 213/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 338277184.0000 - val_loss: 684955328.0000
Epoch 214/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 445776352.0000 - val_loss: 669802752.0000
Epoch 215/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 358180000.0000 - val_loss: 715812032.0000
Epoch 216/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 422238752.0000 - val_loss: 701773632.0000
Epoch 217/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 321866208.0000 - val_loss: 673855936.0000
Epoch 218/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 393520224.0000 - val_loss: 692240896.0000
Epoch 219/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 367629408.0000 - val_loss: 925769088.0000
Epoch 220/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 370751584.0000 - val_loss: 745976448.0000
Epoch 221/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 406338368.0000 - val_loss: 658888064.0000
Epoch 222/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 387554816.0000 - val_loss: 675398272.0000
Epoch 223/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 372827360.0000 - val_loss: 695187200.0000
Epoch 224/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 426353056.0000 - val_loss: 662470208.0000
Epoch 225/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 338804672.0000 - val_loss: 692164544.0000
Epoch 226/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 364825152.0000 - val_loss: 664319360.0000
Epoch 227/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 341856832.0000 - val_loss: 655524672.0000
Epoch 228/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 336951360.0000 - val_loss: 754824896.0000
Epoch 229/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 385400544.0000 - val_loss: 673609792.0000
Epoch 230/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 368269600.0000 - val_loss: 711626304.0000
Epoch 231/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 365181408.0000 - val_loss: 663064512.0000
Epoch 232/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 396501216.0000 - val_loss: 700268544.0000
Epoch 233/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 368982176.0000 - val_loss: 687458112.0000
Epoch 234/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 338120896.0000 - val_loss: 672105024.0000
Epoch 235/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 355861664.0000 - val_loss: 678885952.0000
Epoch 236/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 330874560.0000 - val_loss: 669087552.0000
Epoch 237/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 347505248.0000 - val_loss: 910101504.0000
Epoch 238/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 360539936.0000 - val_loss: 721408576.0000
Epoch 239/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 431605344.0000 - val_loss: 659305024.0000
Epoch 240/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 364833856.0000 - val_loss: 689397376.0000
Epoch 241/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 341386592.0000 - val_loss: 651447296.0000
Epoch 242/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 344382784.0000 - val_loss: 675018304.0000
Epoch 243/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 362960704.0000 - val_loss: 998005248.0000
Epoch 244/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 338512928.0000 - val_loss: 705260224.0000
Epoch 245/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 390462560.0000 - val_loss: 1440343168.0000
Epoch 246/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 444509568.0000 - val_loss: 688372736.0000
Epoch 247/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 303536288.0000 - val_loss: 662463744.0000
Epoch 248/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 376142144.0000 - val_loss: 667582976.0000
Epoch 249/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 330277280.0000 - val_loss: 704775040.0000
Epoch 250/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 336296384.0000 - val_loss: 1115796224.0000
Epoch 251/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 408449056.0000 - val_loss: 722449408.0000
Epoch 252/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 426338976.0000 - val_loss: 646875264.0000
Epoch 253/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 357671104.0000 - val_loss: 686122496.0000
Epoch 254/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 378560736.0000 - val_loss: 835428608.0000
Epoch 255/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 353206976.0000 - val_loss: 907811200.0000
Epoch 256/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 384608608.0000 - val_loss: 858949952.0000
Epoch 257/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 364639168.0000 - val_loss: 654878464.0000
Epoch 258/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 310914944.0000 - val_loss: 809851072.0000
Epoch 259/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 384656032.0000 - val_loss: 773962688.0000
Epoch 260/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 363138112.0000 - val_loss: 658517760.0000
Epoch 261/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 6ms/step - loss: 326860032.0000 - val_loss: 801006400.0000
Epoch 262/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 388740256.0000 - val_loss: 645187456.0000
Epoch 263/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 9ms/step - loss: 397136480.0000 - val_loss: 703780544.0000
Epoch 264/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 318700800.0000 - val_loss: 646817152.0000
Epoch 265/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 420308000.0000 - val_loss: 697550656.0000
Epoch 266/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 331555392.0000 - val_loss: 684124160.0000
Epoch 267/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 409855904.0000 - val_loss: 682219584.0000
Epoch 268/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 335213600.0000 - val_loss: 777894208.0000
Epoch 269/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 3s 10ms/step - loss: 384101632.0000 - val_loss: 655949824.0000
Epoch 270/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 312277568.0000 - val_loss: 664096704.0000
Epoch 271/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 375121568.0000 - val_loss: 703564032.0000
Epoch 272/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 385208288.0000 - val_loss: 767235200.0000
Epoch 273/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 371770304.0000 - val_loss: 664945024.0000
Epoch 274/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 317815488.0000 - val_loss: 689597120.0000
Epoch 275/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 299811168.0000 - val_loss: 649736192.0000
Epoch 276/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 311657088.0000 - val_loss: 711385728.0000
Epoch 277/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 471001312.0000 - val_loss: 640376448.0000
Epoch 278/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 339111904.0000 - val_loss: 642152832.0000
Epoch 279/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 306909280.0000 - val_loss: 679195904.0000
Epoch 280/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 509372192.0000 - val_loss: 673807488.0000
Epoch 281/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 518288608.0000 - val_loss: 663554816.0000
Epoch 282/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 345838464.0000 - val_loss: 657347968.0000
Epoch 283/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 367472320.0000 - val_loss: 684575104.0000
Epoch 284/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 334085600.0000 - val_loss: 1044576448.0000
Epoch 285/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 294990304.0000 - val_loss: 649110848.0000
Epoch 286/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 368626656.0000 - val_loss: 790408960.0000
Epoch 287/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 407730816.0000 - val_loss: 708651904.0000
Epoch 288/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 328701600.0000 - val_loss: 777433728.0000
Epoch 289/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 341624608.0000 - val_loss: 745819328.0000
Epoch 290/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 364799680.0000 - val_loss: 711894336.0000
Epoch 291/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 385281632.0000 - val_loss: 685521664.0000
Epoch 292/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 317256480.0000 - val_loss: 642028544.0000
Epoch 293/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 295186208.0000 - val_loss: 689523584.0000
Epoch 294/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 431704384.0000 - val_loss: 688071616.0000
Epoch 295/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 330948800.0000 - val_loss: 684069120.0000
Epoch 296/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 327046816.0000 - val_loss: 844259584.0000
Epoch 297/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 313673312.0000 - val_loss: 666701632.0000
Epoch 298/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 381468448.0000 - val_loss: 693603328.0000
Epoch 299/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 7ms/step - loss: 310009024.0000 - val_loss: 648956864.0000
Epoch 300/300
259/259 ━━━━━━━━━━━━━━━━━━━━ 2s 8ms/step - loss: 398648224.0000 - val_loss: 686519040.0000
{'loss': [455427552.0, 435829248.0, 432242368.0, 410840608.0, 430319296.0, 420624032.0, 438394656.0, 413609280.0, 440838976.0, 401259968.0, 449632160.0, 390243968.0, 424622112.0, 444255232.0, 422789888.0, 409819840.0, 395853024.0, 420221760.0, 413232064.0, 423344640.0, 422386816.0, 431448064.0, 436476704.0, 406807328.0, 397749184.0, 404231904.0, 428980832.0, 421884128.0, 419475104.0, 390038784.0, 408192416.0, 403624384.0, 422091104.0, 402899712.0, 411341728.0, 383785440.0, 402974624.0, 382023520.0, 424911424.0, 424104704.0, 404653024.0, 387147616.0, 422966880.0, 428390976.0, 398131680.0, 383082400.0, 381563552.0, 424395232.0, 388915648.0, 390738304.0, 381552832.0, 412607040.0, 405860000.0, 399691968.0, 413273408.0, 397228640.0, 393863904.0, 389618464.0, 387549184.0, 407193184.0, 408460064.0, 378277920.0, 378771008.0, 384503680.0, 375078336.0, 401400544.0, 357662944.0, 390153440.0, 387679264.0, 406858432.0, 412006784.0, 382844320.0, 397561216.0, 403756992.0, 400884352.0, 390753024.0, 388871360.0, 386103584.0, 404010752.0, 384652128.0, 406067488.0, 403980448.0, 380404576.0, 392300448.0, 396816768.0, 395344256.0, 393935360.0, 402064032.0, 364298592.0, 391285472.0, 393797056.0, 389681440.0, 398045344.0, 380039424.0, 385268608.0, 411342880.0, 372255616.0, 386887392.0, 384900128.0, 387873952.0, 389575328.0, 393367488.0, 381852640.0, 373552320.0, 380294400.0, 389026688.0, 377844128.0, 362190080.0, 419296896.0, 374577248.0, 378730080.0, 384260512.0, 371836352.0, 374493408.0, 376624384.0, 392000288.0, 383062016.0, 389995104.0, 382201056.0, 368505632.0, 409926528.0, 370644480.0, 372939616.0, 373843616.0, 397647232.0, 383462816.0, 372577920.0, 408433632.0, 378326208.0, 363995712.0, 404390240.0, 380960192.0, 369104160.0, 365923680.0, 429170048.0, 372144352.0, 368142336.0, 390453664.0, 387280608.0, 386938336.0, 371086240.0, 368808768.0, 393055744.0, 409412352.0, 368584736.0, 370785408.0, 388829472.0, 380652640.0, 366350528.0, 384629792.0, 377474688.0, 383160736.0, 383782848.0, 381100128.0, 368054208.0, 438719104.0, 377464896.0, 402190016.0, 381036384.0, 376324800.0, 368586560.0, 385470304.0, 365334528.0, 384747776.0, 376376416.0, 381543648.0, 381439296.0, 361306592.0, 365586208.0, 378083136.0, 383727168.0, 401996512.0, 376538176.0, 367589216.0, 374817824.0, 368123968.0, 365959008.0, 370375456.0, 379206720.0, 373433152.0, 377546688.0, 373926400.0, 368130016.0, 381989504.0, 381471648.0, 363804768.0, 369295616.0, 372198464.0, 361678368.0, 379985408.0, 389663680.0, 388210016.0, 393825088.0, 367133664.0, 366115968.0, 402333664.0, 367964768.0, 384075360.0, 367684288.0, 370981344.0, 363863520.0, 357130368.0, 368262784.0, 382662496.0, 362673472.0, 368070592.0, 376418144.0, 353992864.0, 605341568.0, 367740992.0, 364535936.0, 370093152.0, 366451040.0, 411554848.0, 359488704.0, 362609568.0, 343354016.0, 378581568.0, 373143104.0, 371050976.0, 376469152.0, 382019328.0, 360117152.0, 372631776.0, 372809824.0, 376081760.0, 349011552.0, 358569824.0, 370456576.0, 378130464.0, 353029728.0, 359292416.0, 358567296.0, 383068000.0, 380354560.0, 361525568.0, 363422560.0, 369227008.0, 365772928.0, 367399424.0, 376246528.0, 364809792.0, 366084096.0, 359479552.0, 367570304.0, 379894848.0, 365306112.0, 370708224.0, 365550272.0, 380782272.0, 377873600.0, 354799296.0, 369775264.0, 353128000.0, 362781888.0, 351702816.0, 359556608.0, 351537504.0, 356658944.0, 369208768.0, 371256928.0, 364894464.0, 358049728.0, 353526240.0, 372586912.0, 350198048.0, 369408352.0, 374641312.0, 384004704.0, 398743776.0, 356518720.0, 366522112.0, 368737152.0, 352072864.0, 361838080.0, 367437856.0, 368007072.0, 370023328.0, 347320832.0, 381596192.0, 377748384.0, 351924352.0, 383790336.0, 373728608.0, 352094144.0, 350757216.0, 367284512.0, 366129312.0, 344895104.0, 357921376.0, 361856480.0, 370028352.0, 361763648.0, 376602464.0, 352277248.0, 360021600.0, 354662336.0, 371398880.0, 356746784.0, 370958848.0], 'val_loss': [892461056.0, 853235328.0, 1099082240.0, 934190080.0, 844846080.0, 1352435328.0, 852462720.0, 980103232.0, 866708544.0, 924335168.0, 834286336.0, 852768576.0, 950470912.0, 867789632.0, 827841024.0, 984758016.0, 799164032.0, 847518464.0, 807851328.0, 1258063488.0, 1334589824.0, 1272330880.0, 877872512.0, 805933632.0, 863433472.0, 808496128.0, 1083551616.0, 856750720.0, 805061888.0, 1300659584.0, 775836672.0, 803550208.0, 818053952.0, 792871872.0, 773162496.0, 770067456.0, 792690688.0, 808338304.0, 851775488.0, 967329344.0, 829463168.0, 852967488.0, 908776896.0, 832713664.0, 810450816.0, 835592320.0, 1123609472.0, 797550656.0, 827139648.0, 819812096.0, 850016640.0, 800847296.0, 1118923904.0, 751954752.0, 956748416.0, 788656448.0, 903412416.0, 935675584.0, 742235328.0, 809700224.0, 827263040.0, 830557248.0, 878112000.0, 839070912.0, 748227008.0, 744857472.0, 747212416.0, 755497280.0, 1322627200.0, 999164480.0, 1008469696.0, 802410624.0, 1106876544.0, 804427456.0, 807398336.0, 729133888.0, 874364992.0, 719350016.0, 723441216.0, 1144090368.0, 721575296.0, 746955008.0, 745422656.0, 871384960.0, 823464896.0, 966147008.0, 1184788224.0, 721684480.0, 1016027008.0, 1045063936.0, 720471552.0, 869306048.0, 733128448.0, 932493056.0, 1443143680.0, 749771712.0, 753285696.0, 827456640.0, 730938944.0, 1361253632.0, 712014080.0, 740717440.0, 769180672.0, 842748096.0, 1195997056.0, 1097339520.0, 719440768.0, 753393408.0, 1207032448.0, 720984896.0, 713313344.0, 720303744.0, 702649984.0, 724119872.0, 721729920.0, 755097600.0, 712147008.0, 782895936.0, 905332480.0, 693100736.0, 704931264.0, 837551616.0, 689923712.0, 724274112.0, 737701440.0, 709989824.0, 727535616.0, 692230528.0, 695385344.0, 747537472.0, 754203648.0, 690374464.0, 796221184.0, 688665856.0, 714098048.0, 685124096.0, 682797888.0, 820488704.0, 723850560.0, 778493248.0, 878802112.0, 692737024.0, 755617984.0, 715519488.0, 739041344.0, 695529152.0, 814540736.0, 777667904.0, 859722432.0, 873497472.0, 702976640.0, 778539008.0, 907823744.0, 707088768.0, 726463680.0, 706870592.0, 732358528.0, 759188288.0, 708726976.0, 690117056.0, 699649856.0, 744864320.0, 694027008.0, 740521408.0, 772712064.0, 710717696.0, 685982784.0, 813662976.0, 683239680.0, 681042304.0, 818057728.0, 1482279040.0, 749204032.0, 976408320.0, 758435968.0, 691870208.0, 712863744.0, 679017792.0, 719955136.0, 778199104.0, 722517824.0, 854392320.0, 701124416.0, 737042880.0, 685622528.0, 681878208.0, 683681728.0, 692717376.0, 736208512.0, 709675136.0, 691134208.0, 772660480.0, 987348096.0, 947292480.0, 710164224.0, 1076940928.0, 774606080.0, 978161472.0, 675734784.0, 729686976.0, 689884800.0, 760137600.0, 684043328.0, 695964864.0, 682254208.0, 709653824.0, 1019583808.0, 786788288.0, 737649024.0, 846126272.0, 845724736.0, 701604480.0, 684955328.0, 669802752.0, 715812032.0, 701773632.0, 673855936.0, 692240896.0, 925769088.0, 745976448.0, 658888064.0, 675398272.0, 695187200.0, 662470208.0, 692164544.0, 664319360.0, 655524672.0, 754824896.0, 673609792.0, 711626304.0, 663064512.0, 700268544.0, 687458112.0, 672105024.0, 678885952.0, 669087552.0, 910101504.0, 721408576.0, 659305024.0, 689397376.0, 651447296.0, 675018304.0, 998005248.0, 705260224.0, 1440343168.0, 688372736.0, 662463744.0, 667582976.0, 704775040.0, 1115796224.0, 722449408.0, 646875264.0, 686122496.0, 835428608.0, 907811200.0, 858949952.0, 654878464.0, 809851072.0, 773962688.0, 658517760.0, 801006400.0, 645187456.0, 703780544.0, 646817152.0, 697550656.0, 684124160.0, 682219584.0, 777894208.0, 655949824.0, 664096704.0, 703564032.0, 767235200.0, 664945024.0, 689597120.0, 649736192.0, 711385728.0, 640376448.0, 642152832.0, 679195904.0, 673807488.0, 663554816.0, 657347968.0, 684575104.0, 1044576448.0, 649110848.0, 790408960.0, 708651904.0, 777433728.0, 745819328.0, 711894336.0, 685521664.0, 642028544.0, 689523584.0, 688071616.0, 684069120.0, 844259584.0, 666701632.0, 693603328.0, 648956864.0, 686519040.0]}
In [252]:
# Evaluate the model on the validation data
val_loss = model.evaluate(X_input_reshaped, y)
print(f"Validation Loss: {val_loss}")

# Make predictions (for demonstration)
# Make predictions on the entire dataset
predictions = model.predict(X_input_reshaped)

# Flatten the predictions array for plotting
predictions = predictions.flatten()
print("Predictions for the first 5 samples:")
print(predictions[:5])
323/323 ━━━━━━━━━━━━━━━━━━━━ 1s 2ms/step - loss: 229282480.0000
Validation Loss: 398668928.0
323/323 ━━━━━━━━━━━━━━━━━━━━ 1s 3ms/step
Predictions for the first 5 samples:
[  454.4701  1308.0511  5184.27   13436.633  33832.6   ]
In [253]:
# Calculate performance metrics
mse = mean_squared_error(y, predictions)
mae = mean_absolute_error(y, predictions)
print(f"Mean Squared Error: {mse}")
print(f"Mean Absolute Error: {mae}")
# Calculate R-squared
r2 = r2_score(y, predictions)
print(f"R-squared: {r2}")
Mean Squared Error: 398668756.2073146
Mean Absolute Error: 7740.664420933097
R-squared: 0.7512471695461815
In [254]:
# Plot training & validation loss values
plt.figure(figsize=(14, 7))

# Loss Plot
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.title('Model Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss (MSE)')
plt.legend(loc='upper right')
plt.grid(True)
plt.show()
No description has been provided for this image
In [255]:
# Plot Actual vs Predicted values
plt.figure(figsize=(14, 7))
plt.scatter(y, predictions, label='Predicted vs Actual', color='blue', alpha=0.5)
plt.plot([min(y), max(y)], [min(y), max(y)], color='red', linestyle='--', label='Ideal Fit')
plt.xlim(-1000, 400000)  # Adjust the values as needed
plt.ylim(-1000, 400000)
plt.title('Actual vs Predicted Values')
plt.xlabel('Actual Values')
plt.ylabel('Predicted Values')
plt.legend(loc='upper left')
plt.grid(True)
plt.show()
No description has been provided for this image
In [247]:
predictions.shape
Out[247]:
(10324,)
In [248]:
y.shape
Out[248]:
(10324,)
In [256]:
# Plotting the first 100 predictions against the true values for better visibility
plt.figure(figsize=(20, 5))
plt.plot(y[:200], label='Actual')
plt.plot(predictions[:200], label='Predicted', alpha=0.7)
plt.title('Actual vs. Predicted Values')
plt.xlabel('Sample Index')
plt.ylabel('Demand')
plt.legend()
plt.show()
No description has been provided for this image

Predicting Risk Factor of Shipment¶

Balancing Dataset using SMOT for Risk Factor¶

In [ ]:
# Import necessary libraries
from imblearn.over_sampling import SMOTE
from collections import Counter
In [261]:
# Select relevant features for the ConvLSTM model
features = ['Project Code','Month','Country', 'Shipment Mode', 'Product Group', 'Line_Item_Quantity', 'Line_Item_Value', 
            'Manufacturing Site','Item Description','Weight_Kilograms','Freight_Cost_USD']


# Prepare the final dataset for modeling
features_data = df[features].copy()


label_encoders = {col: LabelEncoder().fit(features_data[col]) for col in features}
for col, encoder in label_encoders.items():
    features_data[col] = encoder.transform(features_data[col])
    
#features = ['Month', 'Line_Item_Quantity']
# Separate features and target variable
X = features_data
y = df['Risk_Factor']

# Check the original distribution
print("Original Risk_Factor distribution:", Counter(y))

# Apply SMOTE
smote = SMOTE(random_state=42)
X_smote, y_smote = smote.fit_resample(X, y)

# Check the new distribution after SMOTE
print("Balanced Risk_Factor distribution:", Counter(y_smote))

# Plot the balanced distribution
plt.figure(figsize=(8, 6))
plt.bar(Counter(y_smote).keys(), Counter(y_smote).values(), color='lightgreen')
plt.xlabel('Risk Factor')
plt.ylabel('Frequency')
plt.title('Distribution of Risk Factor After Applying SMOTE')
plt.show()
Original Risk_Factor distribution: Counter({'L': 8092, 'M': 2097, 'H': 135})
Balanced Risk_Factor distribution: Counter({'L': 8092, 'M': 8092, 'H': 8092})
No description has been provided for this image
In [262]:
X_smote.shape
Out[262]:
(24276, 11)
In [263]:
y_smote.shape
Out[263]:
(24276,)

classification of Risk_Factor (RF,DT,GB,SVM)¶

In [ ]:
# Import necessary libraries
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report, accuracy_score, f1_score, recall_score, roc_auc_score, confusion_matrix
import seaborn as sns
In [280]:
# Split the data into training and testing sets (80-20 split)
X_train, X_test, y_train, y_test = train_test_split(X_smote, y_smote, test_size=0.2, random_state=42, stratify=y_smote)

# Initialize the classifiers
classifiers = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42)
}

# Initialize a list to store evaluation metrics
metrics_list = []

# Train each model, make predictions, and evaluate
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    
    # Only calculate AUC if predict_proba is available (e.g., skip for SVM without probabilities)
    if hasattr(clf, "predict_proba"):
        y_proba = clf.predict_proba(X_test)
        try:
            auc = roc_auc_score(y_test, y_proba, multi_class='ovr', average='weighted')
        except:
            auc = 'N/A'
    else:
        auc = 'N/A'
    
    # Calculate metrics
    accuracy = accuracy_score(y_test, y_pred)
    f1 = f1_score(y_test, y_pred, average='weighted')
    recall = recall_score(y_test, y_pred, average='weighted')
    
    # Append metrics to the list
    metrics_list.append({
        "Model": name,
        "Accuracy": accuracy,
        "F1-Score": f1,
        "Recall": recall,
        "AUC": auc
    })
    
    # Print classification report
    print(f"Classification Report for {name}:\n")
    print(classification_report(y_test, y_pred))
    print("\n" + "="*60 + "\n")

# Create a confusion matrix
    conf_matrix = confusion_matrix(y_test, y_pred)
    
    # Plot the confusion matrix
    plt.figure(figsize=(8, 6))
    sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues', xticklabels=clf.classes_, yticklabels=clf.classes_)
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    plt.title(f'Confusion Matrix: Actual vs Predicted ({name})')
    plt.show()

# Convert the list to a DataFrame
metrics = pd.DataFrame(metrics_list)

# Display the evaluation metrics
print(metrics)
Classification Report for Random Forest:

              precision    recall  f1-score   support

           H       0.97      1.00      0.98      1618
           L       0.93      0.91      0.92      1619
           M       0.91      0.90      0.91      1619

    accuracy                           0.94      4856
   macro avg       0.94      0.94      0.94      4856
weighted avg       0.94      0.94      0.94      4856


============================================================

No description has been provided for this image
Classification Report for Decision Tree:

              precision    recall  f1-score   support

           H       0.93      0.96      0.95      1618
           L       0.87      0.87      0.87      1619
           M       0.84      0.81      0.83      1619

    accuracy                           0.88      4856
   macro avg       0.88      0.88      0.88      4856
weighted avg       0.88      0.88      0.88      4856


============================================================

No description has been provided for this image
Classification Report for Gradient Boosting:

              precision    recall  f1-score   support

           H       0.83      0.93      0.88      1618
           L       0.85      0.85      0.85      1619
           M       0.81      0.70      0.75      1619

    accuracy                           0.83      4856
   macro avg       0.83      0.83      0.83      4856
weighted avg       0.83      0.83      0.83      4856


============================================================

No description has been provided for this image
Classification Report for SVM:

              precision    recall  f1-score   support

           H       0.48      0.51      0.49      1618
           L       0.47      0.44      0.45      1619
           M       0.44      0.44      0.44      1619

    accuracy                           0.46      4856
   macro avg       0.46      0.46      0.46      4856
weighted avg       0.46      0.46      0.46      4856


============================================================

No description has been provided for this image
               Model  Accuracy  F1-Score    Recall       AUC
0      Random Forest  0.936779  0.936459  0.936779  0.988568
1      Decision Tree  0.882002  0.881336  0.882002  0.911465
2  Gradient Boosting  0.828871  0.826205  0.828871  0.942757
3                SVM  0.461903  0.461508  0.461903  0.669977

Predicting Shipment Mode¶

Balancing Dataset using SMOTE for Shipment Mode¶

In [281]:
df['Shipment Mode'].value_counts()
Out[281]:
Shipment Mode
Air            6473
Truck          2830
Air Charter     650
Ocean           371
Name: count, dtype: int64
In [283]:
# Select relevant features for the ConvLSTM model
features = ['Project Code','Month','Country', 'Shipment Mode', 'Product Group', 'Line_Item_Quantity', 'Line_Item_Value', 
            'Manufacturing Site','Item Description','Weight_Kilograms','Freight_Cost_USD']


# Prepare the final dataset for modeling
features_data = df[features].copy()


label_encoders = {col: LabelEncoder().fit(features_data[col]) for col in features}
for col, encoder in label_encoders.items():
    features_data[col] = encoder.transform(features_data[col])
    

# Separate features and target variable
X = features_data.drop(columns=['Shipment Mode'])  # Drop irrelevant columns
y = df['Shipment Mode']

# Check the original distribution
print("Original Shipment Mode distribution:", Counter(y))

# Apply SMOTE
smote = SMOTE(random_state=42)
X_smote, y_smote = smote.fit_resample(X, y)

# Check the new distribution after SMOTE
print("Balanced Shipment Mode distribution:", Counter(y_smote))

# Plot the balanced distribution
plt.figure(figsize=(8, 6))
plt.bar(Counter(y_smote).keys(), Counter(y_smote).values(), color='lightgreen')
plt.xlabel('Shipment Mode')
plt.ylabel('Frequency')
plt.title('Distribution of Shipment Mode After Applying SMOTE')
plt.show()
Original Shipment Mode distribution: Counter({'Air': 6473, 'Truck': 2830, 'Air Charter': 650, 'Ocean': 371})
Balanced Shipment Mode distribution: Counter({'Air': 6473, 'Truck': 6473, 'Air Charter': 6473, 'Ocean': 6473})
No description has been provided for this image

classification of Shipment Mode (RF,DT,GB,SVM)¶

In [285]:
import time


# Split the data into training and testing sets (80-20 split)
X_train, X_test, y_train, y_test = train_test_split(X_smote, y_smote, test_size=0.2, random_state=42, stratify=y_smote)

# Initialize the classifiers
classifiers = {
    "Random Forest": RandomForestClassifier(random_state=42),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42)
}

# Initialize a list to store evaluation metrics
metrics_list = []

# Train each model, make predictions, evaluate, and measure execution time
for name, clf in classifiers.items():
    start_time = time.time()  # Start the timer
    clf.fit(X_train, y_train)
    end_time = time.time()  # End the timer
    
    y_pred = clf.predict(X_test)
    
    # Only calculate AUC if predict_proba is available (e.g., skip for SVM without probabilities)
    if hasattr(clf, "predict_proba"):
        y_proba = clf.predict_proba(X_test)
        try:
            auc = roc_auc_score(y_test, y_proba, multi_class='ovr', average='weighted')
        except:
            auc = 'N/A'
    else:
        auc = 'N/A'
    
    # Calculate metrics
    accuracy = accuracy_score(y_test, y_pred)
    f1 = f1_score(y_test, y_pred, average='weighted')
    recall = recall_score(y_test, y_pred, average='weighted')
    
    # Calculate execution time
    execution_time = end_time - start_time
    
    # Append metrics to the list
    metrics_list.append({
        "Model": name,
        "Accuracy": accuracy,
        "F1-Score": f1,
        "Recall": recall,
        "AUC": auc,
        "Execution Time (s)": execution_time
    })
    
    # Print classification report
    print(f"Classification Report for {name}:\n")
    print(classification_report(y_test, y_pred))
    print("\n" + "="*60 + "\n")

# Convert the list to a DataFrame
metrics = pd.DataFrame(metrics_list)

# Display the evaluation metrics, including execution time
print(metrics)

# Plot the correlation matrix (confusion matrix) for each classifier
for name, clf in classifiers.items():
    y_pred = clf.predict(X_test)
    
    # Create a confusion matrix
    conf_matrix = confusion_matrix(y_test, y_pred)
    
    # Plot the confusion matrix
    plt.figure(figsize=(8, 6))
    sns.heatmap(conf_matrix, annot=True, fmt='d', cmap='Blues', xticklabels=clf.classes_, yticklabels=clf.classes_)
    plt.xlabel('Predicted')
    plt.ylabel('Actual')
    plt.title(f'Confusion Matrix: Actual vs Predicted ({name})')
    plt.show()
Classification Report for Random Forest:

              precision    recall  f1-score   support

         Air       0.94      0.92      0.93      1295
 Air Charter       0.98      0.99      0.98      1294
       Ocean       0.97      0.99      0.98      1295
       Truck       0.93      0.93      0.93      1295

    accuracy                           0.96      5179
   macro avg       0.96      0.96      0.96      5179
weighted avg       0.96      0.96      0.96      5179


============================================================

Classification Report for Decision Tree:

              precision    recall  f1-score   support

         Air       0.90      0.90      0.90      1295
 Air Charter       0.97      0.97      0.97      1294
       Ocean       0.95      0.96      0.96      1295
       Truck       0.90      0.89      0.89      1295

    accuracy                           0.93      5179
   macro avg       0.93      0.93      0.93      5179
weighted avg       0.93      0.93      0.93      5179


============================================================

Classification Report for Gradient Boosting:

              precision    recall  f1-score   support

         Air       0.90      0.87      0.88      1295
 Air Charter       0.96      0.97      0.97      1294
       Ocean       0.93      0.92      0.93      1295
       Truck       0.86      0.89      0.88      1295

    accuracy                           0.91      5179
   macro avg       0.91      0.91      0.91      5179
weighted avg       0.91      0.91      0.91      5179


============================================================

Classification Report for SVM:

              precision    recall  f1-score   support

         Air       0.52      0.64      0.57      1295
 Air Charter       0.50      0.64      0.56      1294
       Ocean       0.55      0.56      0.55      1295
       Truck       0.53      0.25      0.34      1295

    accuracy                           0.52      5179
   macro avg       0.52      0.52      0.51      5179
weighted avg       0.52      0.52      0.51      5179


============================================================

               Model  Accuracy  F1-Score    Recall       AUC  \
0      Random Forest  0.956169  0.955987  0.956169  0.995814   
1      Decision Tree  0.927592  0.927500  0.927592  0.951726   
2  Gradient Boosting  0.912918  0.912892  0.912918  0.985554   
3                SVM  0.521915  0.506803  0.521915  0.778360   

   Execution Time (s)  
0           10.289368  
1            0.428003  
2           31.802354  
3          136.726381  
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image
In [ ]: